Stack overflow in the 1C built-in language

10.01.2022

04/14/2016 Version 3.22 The interface has been changed, errors in transferring registers have been fixed, and the procedure for transferring an organization and its accounting policy has been changed. Platform 8.3.7.2027, BP 3.0.43.174
03/17/2016 Version 3.24 Reported errors have been fixed. Platform 8.3.8.1747, BP 3.0.43.241
06/16/2016 Version 3.26 Reported errors have been fixed. Platform 8.3.8.2088, BP 3.0.44.123
10/16/2016 Version 4.0.1.2 Fixed the transfer of value storages; changed the transfer of accounting policy for releases 3.44.*. Platform 8.3.9.1818, BP 3.0.44.164.
04/19/2017 Version 4.0.2.7 Changed the algorithm for transferring registers associated with directories, fixed reported errors, fixed transfer with overwriting of references.
05/29/2017 Version 4.0.4.5 Changed the transfer of movements, added viewing of the movements of transferred documents, and more...
05/30/2017 Version 4.0.4.6 Fixed an error when filling the list of directories existing in the source (thanks shoy)
06/17/2017 Version 4.0.5.1 Changed the algorithm for transferring movements.
07/19/2017 Version 4.0.5.4 Changed the transfer of contact information from BP 2.0. Unexpectedly, Smilegm used the processing to transfer from UT 10.3; this version slightly adjusts the transfer for that situation )))
08/10/2017 Version 4.0.5.5 Fixed errors when transferring from BP 2.0
09/19/2017 Version 4.4.5.7 Fixed the connection check for 3.0.52.*
11/28/2017 Version 4.4.5.9 Fixed reported errors
12/06/2017 Version 5.2.0.4 The reference search algorithm has been redesigned. Transfer procedures from BP 1.6 have been added; there is no longer a strict binding to BP, so you can easily use the processing to transfer data between "almost" identical configurations. I will try to address all comments promptly.
12/08/2017 Version 5.2.1.3 Added an algorithm for transferring salary statements from BP 2.0 to BP 3.0. Included changes for exchange between identical configurations.
12/19/2017 Version 5.2.2.2 Adjusted the transfer of independent information registers for directories that appear in the dimensions of those registers.

12/06/2017 New processing version 5.2.0.4. Among the significant changes is the ability to transfer from BP 1.6 to BP 3.0. The main change is control over the search for directory references: in previous versions the search was by GUID only, while in this version you can enable the "By details" search:

01/17/2018 Version 5.2.2.3 Fixed - noticed errors in subordinate directories and periodic information registers.

07/19/2018 Version 5.2.2.8 Noticed errors have been corrected.

in which you can set search details for any directory. This mode "emerged" at the numerous requests of users, for cases when an exchange is needed into an existing database that already contains data (for example, to merge the accounting of two organizations into one database).

12/21/2015 Platform 8.3.7.1805 and BP 3.0.43.29 were released, and accordingly a new processing version 3.1 :-) (description below). New functionality: the ability to compare balances and turnovers between two BP databases (for all accounts if the charts of accounts coincide, or for individual matching accounting accounts, with or without analytics).
01/03/2016 Version 3.5 - the mechanism for connecting to the source base has been changed - brought into compliance with BSP 2.3.2.43. Minor bugs fixed. Platform 8.3.7.1845, BP 3.0.43.50
02/16/2016 Version 3.6 - Added the "Set manual correction" flag for documents transferred with movements. Fixed transfer of movements - documents with a date less than the beginning of the period are transferred without movements. Platform 8.3.7.1917, BP 3.0.43.116
03/22/2016 Version 3.10 - Added the "Always overwrite references" flag for mandatory rewriting of referenced objects (the transfer speed is significantly reduced, but sometimes it is necessary). The "Preparation" tab has been added, where you can configure the correspondence of the source and destination charts of accounts (at the same level as account codes) and the transfer of constants. Platform 8.3.7.1970, BP 3.0.43.148

04/03/2016 Version 3.11 The filling of the list of documents existing in the source has been changed: previously it was filled by movements according to the chart of accounts; now it is done simply by references for the period, just as in //site/public/509628/

The processing is intended for transferring data for any period, much like the "MXL upload" from ITS, only without XML, JSON or other intermediate files: the exchange goes from database to database via COM. In versions after 3.10 a connection algorithm from the BSP is used, which handles registration of comcntr.dll (if the OS "allows" it) and issues various messages when a connection cannot be established, for example "The infobase is in the process of updating", etc. A check has been added against selecting the receiver as the source infobase - a warning is issued.

Can be used for:

1. Transfer of regulatory reference information (RNI) from the source infobase to the destination infobase (the transfer of all reference information is performed at the user's request; the necessary directories, etc. are transferred by reference during any transfer).

2. Transfer of documents for any selected period.

3. Transfer of all information from a "broken" infobase, if it can still be launched in 1C:Enterprise mode but exporting data or launching the Configurator is impossible.

A feature of the processing: the configurations of the receiver and the source may differ. Transfer from 2.0 to 3.0 works even though the editions differ!!! Mismatched attributes are ignored, or transfer algorithms must be specified for them.

Comment: Data Conversion is NOT used! And don't ask why!!! For the especially picky: BP 3.0 changes almost every day, and there is no longer the strength to keep transfer rules up to date - everything is simpler here :-).

Another feature of the processing is that it is launched in the receiver infobase (the closest analogues in functionality work the other way around, from the source to the receiver).

Getting started: specify the transfer period and select the organization from the source; it will be transferred to the destination.

When transferring an organization, the accounting policies and “related” information registers are transferred. Therefore, when you first select an organization in the source, some time will pass before it appears in the receiver.

The charts of accounts of the source and destination must be the same; accounts that differ in the 2.* editions are not transferred to the destination (configuring correspondences of accounts and analytics is planned for the future). Accounts are matched by their codes; accounts whose codes are not found in the receiver CANNOT be created!!!

The remaining objects are transferred using internal identifiers (GUID), so you should pay attention to some key directories, for example - Currencies.

If you plan to exchange with a "clean" database, it is better to delete the directories filled in during the first launch before the exchange. For this purpose the processing has a page where you can get these directory elements and delete them. At a minimum you need to remove the "rub" currency, because duplication is almost inevitable (in principle this is easily corrected after the exchange with the duplicate search-and-replace built into BP 3.0).

The processing provides a page for deleting the directories created during initial filling; it is displayed when the processing is opened:

Since version 3.22 the interface has changed: all preparatory operations are now on tabs and always available.


It is important to check the correspondence of the Chart of Accounts of the source and recipient and be sure to indicate the correspondence of the accounts.

There is no need to delete predefined directory elements - they are transferred by configuration identifiers (not GUIDs).

You can select objects for transfer using the selection form for directories and documents (information registers associated with these objects are transferred automatically, so there is no need to select them separately). The standalone transfer of registers is temporarily disabled: a list of registers for transfer still needs to be worked out, since some should be transferred and some should not. At this stage what is transferred along with the directories is enough; a list of registers for transfer will appear in a template in future versions.

When exchanging with 2.0, some attributes (for example, contact information) are transferred by an algorithm built into the processing, because 2.0 and 3.0 store them differently. The situation is similar for a number of documents (for example, Debt Adjustment).

In version 3.22 the list of object types can be filled in different ways; this has been moved to a submenu, and the changes are shown in the picture:

To simplify the use of the processing, you do not have to select directories for exchange: you can simply fill the list of types in the receiver with only those directory types that have at least one entry in the source.

The processing has a built-in layout listing directories that do not need to be transferred from the source to the destination (the "Exclude from transfer" layout). You can add any directories to this layout or remove them from it. If you do not need to transfer all the reference data, it is enough to transfer the documents; their list can also be obtained without selecting types, simply by filling it with all source documents for which postings exist.

Documents can be transferred with their movements. For 3.0-to-3.0 exchanges with matching charts of accounts this works one to one; for 2.0-to-3.0 exchanges errors are possible, so it is recommended to transfer documents without movements and then simply post them in the receiver. When documents are transferred with movements, the "Manual adjustment" flag is set.

The "Posted" attribute is set on the recipient documents the same way as in the source, but movements (if they were not transferred) will appear only after the documents are posted, for example with the Group posting of documents processing built into BP 3.0 (the recommended option), or from this processing (there is a "Post documents" button here).

If the processing is planned for permanent use, it can be registered in the receiver infobase (the "Register" button). For one-off transfers you can simply run it via File - Open.

12/21/2015 - Version 3.1, platform 8.3.7.1805 and BP 3.0.43.29 (version 2.15 does not work for 3.0.43.*; the configuration has changed quite a lot).

Changed:

The dialog for selecting a connection option: the Client-server flag is always available; depending on its setting, either the selection of a file database folder or fields for the database name and the server name are shown (a bug in the version 2.15 dialog has been fixed)

- NEW FUNCTIONALITY: A mechanism for reconciling balances and turnover between the source and receiver databases in varying degrees of detail:


I think the choice of verification options is clear from the figure:


There are differences in use in the thin and thick clients - in the thick client, a file comparison window is immediately displayed:


In the thin client, I didn’t bother with programmatic pressing of buttons; I suggest a simple option for displaying a comparison window:


Comparison in the thin client is, IMHO, more convenient, because it has navigation buttons for jumping between differences, which is handier for large tables than scrolling with the mouse:

03/22/2016 Version 3.10 - Added the "Always overwrite references" flag for mandatory rewriting of referenced objects (the transfer speed is significantly reduced, but sometimes it is necessary). The "Preparation" tab has been added, where you can configure the correspondence of the source and destination charts of accounts (at the same level as account codes) and the transfer of constants. Platform 8.3.7.1970, BP 3.0.43.148

- NEW FUNCTIONALITY: Before transferring documents, it is recommended to check that the charts of accounts in the source and destination match, and to check the values of the constants.

For this purpose, a “Preparation” tab has been added in which you can set these correspondences:


The algorithm for filling the account matching table is simple: the turnover existing in the source is analyzed, and for each account found there a match is searched in the receiver by code. If no match is found, a row with the account code appears in the table, where you need to select the receiver account to be used in the transfer. For now, correspondence is established at the level of account codes.

To check and transfer the correspondence of the established constants, the corresponding table is used:

We fill it out and transfer it if necessary. Only constants marked with the flag are transferred...

The program stack is a special memory area organized on the LIFO (last in, first out) principle. The name "stack" comes from the analogy with a stack of plates: you can put plates on top of one another (adding to the stack, "pushing"), and then take them away starting from the top (getting a value off the stack, "popping"). The program stack is also called the call stack, execution stack, or machine stack (to avoid confusing it with "stack" the abstract data structure).

What is a stack for? It makes it convenient to organize subroutine calls. When called, a function receives some arguments; it must also store its local variables somewhere. Besides, one function can call another function, which also needs parameters passed and variables stored. With the stack, to pass parameters you just push them onto it; the called function can then pop them and use them. Local variables can be stored there too: at the start of its code the function allocates part of the stack memory, and when control returns it clears and frees it. Programmers in high-level languages usually do not think about such things - the compiler generates all the necessary routine code for them.

Consequences of an error

Now we are getting close to the problem. In the abstract, a stack is an infinite store to which new items can be added endlessly. Unfortunately, in our world everything is finite, and stack memory is no exception. What happens if it runs out while the function's arguments are being pushed onto the stack? Or while the function is allocating memory for its variables?

An error called a stack overflow will occur. Since the stack is needed to organize calls to user-defined functions (and almost all programs in modern languages, including object-oriented ones, are built on functions in one way or another), no more functions can be called. Therefore the operating system takes control, clears the stack, and terminates the program. Here we can highlight the difference between a buffer overflow and a stack overflow: in the first case the error is an access to an incorrect memory area, and if there is no protection at that point it does not manifest itself at that moment; with a lucky combination of circumstances the program may even keep working normally, failing only if the memory being accessed happens to be protected. In the case of the stack, the program certainly terminates.

To be completely precise, such a description only holds for compilers that produce native code. In managed languages the virtual machine has its own stack for managed programs, whose state is much easier to monitor, and it can even afford to throw an exception to the program when an overflow occurs. In C and C++ you cannot count on such a "luxury".

Reasons for the error

What could lead to such an unpleasant situation? Based on the mechanism described above, one possibility is too many nested function calls. This scenario is especially likely with recursion. Infinite recursion (in the absence of a lazy evaluation mechanism) is interrupted in exactly this way, unlike an infinite loop, which sometimes has useful applications. However, with a small amount of stack memory (typical, for example, of microcontrollers), even a simple sequence of calls may be enough.

Another option is local variables that require a lot of memory. Having a local array of a million elements, or a million local variables (anything can happen), is not the best idea. Even one call to such a greedy function can easily cause a stack overflow. For large amounts of data it is better to use dynamic memory mechanisms, which allow handling an out-of-memory error gracefully.

However, dynamic memory is rather slow to allocate and free (since the operating system handles this), and it has to be allocated and freed manually. Memory on the stack is allocated very quickly (in essence, only the value of one register needs to change); in addition, objects allocated on the stack have their destructors called automatically when the function returns and the stack is cleared. Naturally, the desire arises to get memory from the stack. So the third way to overflow is the programmer's own allocation of memory on the stack. The C library provides the alloca function specifically for this purpose. Interestingly, while the dynamic allocation function malloc has a "twin" for freeing memory, free, the alloca function has none: the memory is freed automatically when the function returns. Perhaps this only complicates matters, since the memory cannot be freed before the function exits. Even though, according to the man page, "the alloca function is machine and compiler dependent; on many systems its implementation is problematic and buggy; its use is quite frivolous and frowned upon", it is still used.

Examples

As an example, let's look at the code for recursive file search located on MSDN:

void DirSearch(String* sDir)
{
    try
    {
        // Find the subfolders in the folder that is passed in.
        String* d[] = Directory::GetDirectories(sDir);
        int numDirs = d->get_Length();
        for (int i = 0; i < numDirs; i++)
        {
            // Find all the files in the subfolder.
            String* f[] = Directory::GetFiles(d[i], textBox1->Text);
            int numFiles = f->get_Length();
            for (int j = 0; j < numFiles; j++)
            {
                listBox1->Items->Add(f[j]);
            }
            DirSearch(d[i]);
        }
    }
    catch (System::Exception* e)
    {
        MessageBox::Show(e->Message);
    }
}

This function gets the list of files in the specified directory and then calls itself for each list element that turns out to be a directory. Accordingly, with a sufficiently deep directory tree we get the natural result.

An example of the second cause, taken from the question "Why does stack overflow happen?" on a site called Stack Overflow (the site collects questions and answers on all programming topics, not just stack overflows, as the name might suggest):

#define W 1000
#define H 1000
#define MAX 100000
//...
int main()
{
    int image[W*H];
    float dtr[W*H];
    initImg(image, dtr);
    return 0;
}

As you can see, the main function allocates memory on the stack for arrays of the int and float types, each with a million elements, which in total gives a little less than 8 megabytes. If you consider that by default Visual C++ reserves only 1 megabyte for the stack, then the answer becomes obvious.

And here is an example taken from the GitHub repository of the Lightspark Flash player project:

DefineSoundTag::DefineSoundTag(/* ... */)
{
    // ...
    unsigned int soundDataLength = h.getLength() - 7;
    unsigned char *tmp = (unsigned char *)alloca(soundDataLength);
    // ...
}

You can hope that h.getLength()-7 is not too large a number so that there is no overflow on the next line. But is the time saved on memory allocation worth the “potential” crash of the program?

Bottom line

Stack overflow is a fatal error that most often affects programs containing recursive functions. However, even if a program contains no such functions, overflow is still possible due to large local variables or an error in manual allocation of memory on the stack. All the classic rules remain in force: given the choice, prefer iteration to recursion, and do not do the compiler's work by hand.

Bibliography

  • A. Tanenbaum. Computer Architecture.
  • Wikipedia. Stack overflow.
  • Stack Overflow. Stack overflow C++.

The stack, in this context, is a last-in, first-out buffer you allocate during the execution of your program. Last in, first out (LIFO) means that the last thing you push is always the first thing you pop back off: if you push two items onto the stack, "A" and then "B", the first thing you pop off will be "B" and the next will be "A".

When you call a function in your code, the next instruction after the call is stored on the stack, along with any register contents the function call may overwrite. The called function may use more stack for its own local variables. When it is done, it frees the local variable space it was using and returns to the previous function.

Stack Overflow

A stack overflow is when you use more memory on the stack than your program was intended to use. On embedded systems you may only have 256 bytes for the stack, and if each function call takes 32 bytes, then you can only have function calls 8 deep: function 1 calls function 2, which calls function 3, ... which calls function 8, which calls function 9 - and function 9 overwrites memory outside the stack. This may overwrite memory, code, etc.

Many programmers make this mistake by having function A call function B, which then calls function C, which then calls function A. It may work most of the time, but just one wrong input will cause it to cycle forever until the computer recognizes that the stack has overflowed.

Recursive functions are also a cause of this, but if you are writing recursively (i.e. your function calls itself) then you need to be aware of it and use a static/global variable (or an explicit depth parameter) to prevent endless recursion.

Typically, the OS and the programming language you're using manage the stack, and it's out of your hands. You should look at your call graph (a tree structure showing, from your entry point, what each function calls) to see how deep your function calls go, and to spot unintended cycles and recursion. Intentional cycles and recursion need explicit error checks that trip if they call each other too many times.

Apart from good programming practices, static and dynamic testing, there is not much you can do in these high level systems.

Embedded Systems

In the embedded world, especially in high-assurance code (automotive, aviation, space), you do extensive testing and code review, but you also do the following:

  • Disable recursion and cycles - enforced by policy and testing
  • Keep code and stack far apart (code in flash, stack in RAM, so they can never collide)
  • Place guard bands around the stack - an empty region of memory filled with a magic number, checked hundreds or thousands of times per second (usually from an interrupt routine, though there are many variations) to make sure it hasn't been overwritten
  • Use memory protection (i.e. no execution on the stack, no reads or writes just beyond the stack)
  • Interrupts don't call secondary functions - they set flags, copy data, and let the application take care of the processing (otherwise you might be 8 deep in your function call tree, get an interrupt, go a few more functions deep inside the interrupt, and overflow). You have several call trees - one for the main process and one for each interrupt. If your interrupts can interrupt each other... well, there be dragons...

High-level languages and systems

But in high-level languages running on operating systems:

  • Reduce local variable storage (local variables are stored on the stack - although compilers are quite smart about this and will sometimes put large chunks on the heap if your call tree is shallow)
  • Avoid or strictly limit recursion
  • Don't break your programs down into ever smaller and smaller functions - even ignoring local variables, each function call can consume up to 64 bytes of stack (32-bit processor, saving half the processor's registers, flags, etc.)
  • Keep the call tree shallow (similar to the point above)

Web servers

It depends on the sandbox: whether you can control or even see the stack. Chances are you can treat web servers like any other high-level language and operating system, where it's largely out of your hands, but check the language and server stack you're using. For example, it is possible to blow the stack on your SQL server.

The Informix® DataBlade™ API Programmer's Guide is available for download. Its "Managing Stack Space" section describes managing stack space when creating user-defined routines (UDRs). This article provides additional information and debugging tips.

The following information applies whether the UDR runs on a user-defined virtual processor (VP) or on a CPU VP. A thread's stack can be migrated to a user-defined virtual processor just before the UDR is executed.

What size stack is allocated for UDR?

The size of the stack available to a UDR depends on how the UDR was created:

    using the STACK modifier, which gives the UDR its own specially allocated stack;

    without the STACK modifier, in which case the UDR shares the stack that the server allocated for the thread serving the request. The stack size is then determined by the STACKSIZE parameter in the onconfig configuration file.

STACK modifier

The CREATE PROCEDURE or CREATE FUNCTION statements have an optional STACK modifier that allows you to specify the amount of stack space, in bytes, that is required to execute the UDR.

If you use the STACK modifier when creating a UDR, the server will allocate and deallocate stack space each time the UDR is executed. The actual available size is equal to the STACK value in bytes minus some overhead depending on the number of function arguments.

If the STACK value is less than the STACKSIZE parameter in the onconfig file (see next section), then the stack size allocated for the UDR will be automatically rounded up to the STACKSIZE value.

STACKSIZE configuration parameter

The onconfig configuration file includes a STACKSIZE parameter that specifies the default stack size for user threads.

If you do not specify STACK when creating a UDR, the server does not allocate additional stack space to execute that UDR. Instead, the UDR uses the stack space allocated to execute the request. The available stack size will depend on the overhead of executing the function at the SQL level.

The stack per thread is allocated once for the specific thread executing the request. Performance is better when the UDR shares one stack with a thread, since the server does not waste resources on allocating an additional stack for each UDR call. On the other hand, if the stack size used by the UDR approaches the STACKSIZE value, it may cause a stack overflow when calling the function as part of a complex query (in which case less stack space will be available for UDR execution).

Please note that you should not set the STACKSIZE value too high, as this will affect all user threads.

When is it necessary to control stack size?

You must manage stack space if the UDR makes recursive calls, or if the UDR requires more stack space than is available by default in the request thread's stack (STACKSIZE).

There are two ways to increase the stack for UDR execution:

    Specify the STACK modifier when creating a UDR.

    Use mi_call() to make recursive calls (see the Informix DataBlade API Programmer's Guide for an example).

If you don't specify a size via STACK, and if you don't use mi_call() to increase the current stack, and if the UDR does something that requires a lot of stack space, it will cause a stack overflow.

Note that some mi_* functions add a new stack segment for their own execution. These segments are freed when returning to the calling UDR function.

What to do if something goes wrong?

Monitoring Stack Usage

The purpose of monitoring is to identify the specific UDR that is causing the stack overflow so that you can change the STACK value specifically for that particular UDR.

    Monitoring stack usage with "onstat -g sts" command

    Monitoring a session executing a SQL query using "onstat -g ses session_id"

After identifying an SQL query that ends in a stack overflow, determine stack usage by separately executing the UDRs that are part of the original query.

You can dynamically set the STACK value for UDR. For example:

alter function MyFoo (lvarchar,lvarchar) with (add stack=131072);

After changing the STACK value, you should test the original request to ensure that it is now stable.

Increase STACKSIZE

Alternatively, try increasing the STACKSIZE value. Check if this solves the problem. (Don't forget to return the old value later).

If increasing STACKSIZE does not help, the problem is most likely memory corruption. Here are some suggestions:

    Enable memory scribble and memory pool checking. The "Debugging Problems" section in the Memory Allocation for UDRs article explains how to do this.

    Reconsider your use of mi_lvarchar. Pay particular attention to places where mi_lvarchar is passed to a function that expects a null-terminated string as an argument.

    Reduce the number of CPU (or user) VPs to one to reproduce the problem faster.

mi_print_stack() -- Solaris

Informix Dynamic Server for Solaris OS includes a mi_print_stack() function that can be called in the UDR. By default, this function saves the stack frame to the following file:

/tmp/default.stack

You cannot change the name of the output file, but you can change its location by changing the value of the DBTEMP environment variable. Make sure that the $DBTEMP directory is writable by the informix user. Any errors encountered while executing mi_print_stack() are reported to $MSGPATH.

This function is only available on Solaris.

Glossary

Terms and abbreviations used in this article:

UDR - User-Defined Routine
VP - Virtual Processor

This article once again demonstrates that any set of security measures must cover all stages of implementation: development, deployment, system administration and, of course, organizational measures. In information systems it is the "human factor" (including users) that is the main security threat. The set of measures must be reasonable and balanced: it makes no sense, and funds are unlikely to be allocated, to organize protection that costs more than the data it protects.

Introduction

1C:Enterprise is the most widespread accounting system in Russia, yet until version 8.0 its developers paid very little attention to security. Mostly, of course, this was dictated by the product's price niche and its focus on small businesses with no qualified IT specialists, where deploying and maintaining a secure system would be prohibitively expensive. With the release of version 8.0 the emphasis had to change: the cost of solutions grew significantly, and the system became much more scalable and flexible, so the requirements changed accordingly. Whether the system has become sufficiently reliable and secure is a very individual question. The main information system of a modern enterprise must meet at least the following security requirements:

  • Quite low probability of system failure due to internal reasons.
  • Reliable user authorization and data protection from incorrect actions.
  • An effective system for assigning user rights.
  • Online backup and recovery system in case of failure.

Do solutions based on 1C:Enterprise 8.0 meet these requirements? There is no clear answer. Despite significant changes in the access control system, many unresolved issues remain. Depending on how the system is designed and configured, each of these requirements may be met, or not met, to the extent a particular deployment demands; it is worth noting, however (and this is a significant consequence of the platform's "youth"), that fully satisfying the listed conditions takes truly Herculean effort.

This article is intended for developers and implementers of solutions on the 1C:Enterprise platform, as well as for system administrators of organizations where 1C:Enterprise is used. It describes some aspects of developing and configuring the client-server version of the system from the standpoint of organizing information security. It cannot serve as a replacement for the documentation; it only points out some aspects not yet reflected there. And, of course, neither this article nor all of the documentation can capture the full complexity of building a secure information system that must simultaneously satisfy the conflicting requirements of security, performance, convenience and functionality.

Classification and terminology

The key subject of consideration in the article is information threats.

Information threat – the possibility of a situation in which data is read, copied, changed or blocked without authorization.

And, based on this definition, the article classifies information threats as follows:

  • Unauthorized destruction of data
  • Unauthorized change of data
  • Unauthorized copying of data
  • Unauthorized reading of data
  • Data unavailability

All threats are divided into intentional and unintentional. A realized information threat will be called an incident. The relevant properties of the system are:

Vulnerabilities – features that lead to incidents
Protection measures – features that block the possibility of an incident

Only those cases are considered whose likelihood stems from the use of the 1C:Enterprise 8.0 technological platform in the client-server version (hereafter simply 1C or 1C 8.0 where this causes no ambiguity). Let us define the following main roles in relation to the use of the system:

  • Operators – users who have rights, limited by an application role, to view and change data, but no administrative functions
  • System administrators – users with administrative rights in the system, including administrative rights in the operating systems of the application server and the MS SQL server, administrative rights in MS SQL, etc.
  • Information security administrators – users to whom certain administrative functions in the 1C infobase are delegated (such as adding users, testing and repair, backup, configuring the application solution, etc.)
  • System developers – users who develop the application solution; in general they may have no access to the working system
  • Persons without direct access to the system – users to whom no 1C access rights are delegated but who can, to one degree or another, influence the operation of the system (usually all users of the Active Directory domain in which the system is installed). This category is considered primarily to identify potentially dangerous subjects in the system.
  • Automated administrative scripts – programs to which certain functions are delegated and that are designed to perform certain actions automatically (for example, data import-export)

Two points should be noted here: first, this classification is quite rough and ignores divisions within each group (such divisions can be drawn for specific cases); second, it is assumed that no other persons can influence the operation of the system, which must be ensured by means external to 1C.

Any security system must be designed with feasibility and cost of ownership in mind. In general, when developing and implementing an information system, it is necessary that the price of protecting the system corresponds to:

  • the value of the protected information;
  • costs of creating an incident (in case of a deliberate threat);
  • financial risks in case of an incident

It is pointless, and even harmful, to organize protection that costs much more than the estimated financial loss from an incident. There are several methods for assessing the risk of information loss, but they are beyond the scope of this article. Another important aspect is maintaining a balance among the often conflicting requirements of information security, system performance, convenience and ease of use, development and implementation speed, and the other requirements placed on enterprise information systems.

Main features of the system information security mechanism

1C:Enterprise 8.0 comes in two versions: file and client-server. The file version cannot be considered capable of ensuring the information security of the system, for the following reasons:

  • Data and configuration are stored in a file that is readable and writable by all users of the system.
  • As will be shown below, system authorization is very easily bypassed.
  • The integrity of the system is ensured only by the kernel of the client part.

In the client-server version, MS SQL Server is used to store information, which provides:

  • More reliable data storage.
  • Isolation of files from direct access.
  • More advanced transaction and locking mechanisms.

Despite the significant differences between the file and client-server versions of the system, they have a unified access control scheme at the application solution level, which provides the following capabilities:

  • User authorization using the password specified in 1C.
  • User authorization based on the current Windows user.
  • Assigning roles to system users.
  • Limiting administrative functions by role.
  • Assignment of available interfaces by roles.
  • Restricting access to metadata objects by role.
  • Restricting access to object details by role.
  • Restricting access to data objects by roles and session parameters.
  • Restricting interactive access to data and executable modules.
  • Some code execution restrictions.

In general, the data access scheme used is quite typical for information systems of this level. However, in this implementation of a three-tier client-server architecture there are several fundamental aspects that lead to a relatively large number of vulnerabilities:

  1. A large number of data processing stages, and at each stage different rules for accessing objects may apply.

    A somewhat simplified diagram of the data processing stages that are significant from a security point of view is shown in Fig. 1. The general rule in 1C is that restrictions loosen as you move down this scheme, so exploiting a vulnerability at one of the upper levels can compromise the system at all levels.

  2. Insufficiently established procedures for monitoring transmitted data when moving from level to level.

    Unfortunately, not all internal mechanisms of the system are perfectly debugged, especially the non-interactive ones, whose debugging is more labor-intensive on the one hand and more critical on the other. This "disease" is not exclusive to 1C; it is found in server products from most vendors. Only in recent years has attention to these problems increased significantly.

  3. Insufficiently high average qualifications of developers and system administrators, inherited from the previous version.

    Products of the 1C:Enterprise line were initially focused on ease of development and support and on work in small organizations, so it is not surprising that a significant share of "developers" of application solutions and "administrators" of systems historically lack the knowledge and skills to work with the much more complex product that version 8.0 is. The problem is aggravated by the franchisee practice of training "in combat" at the clients' expense, without a systematic approach. To give 1C its due, over the past few years this situation has gradually been corrected: serious franchisee companies now approach personnel selection and training more responsibly, the level of support from 1C has risen significantly, and certification programs aimed at a high level of service have appeared; but the situation cannot be fixed instantly, so this factor should be taken into account when analyzing the security of a particular system.

  4. The platform is relatively young.

    Among products of similar focus and purpose, this is one of the youngest solutions. The functionality of the platform settled down more or less only about a year ago. Each release of the platform starting with 8.0.10 (the release in which almost all of the current capabilities of the system appeared) has been noticeably more stable than the previous ones. The functionality of the standard application solutions is still growing by leaps and bounds, even though only half of the platform's capabilities are used so far. Of course, under such conditions stability can be spoken of only conditionally, but on the whole it must be recognized that in many respects solutions on the 1C 8.0 platform significantly outperform similar solutions on the 1C 7.7 platform in functionality and performance (and often in stability).

So, the system (and possibly a standard application solution) has been deployed in the enterprise and installed on its computers. First of all, an environment must be created in which configuring 1C security makes sense: the surrounding infrastructure must be set up so that the system's own settings really do determine its security.

Follow the general rules for setting up security.

There can be no talk of any information security of a system if the basic principles of creating secure systems are not followed. Be sure to ensure that at least the following conditions are met:

  • Access to the servers is physically limited and their uninterrupted operation is ensured:
    • the server equipment meets reliability requirements, procedures for replacing faulty server equipment are in place, and for especially critical areas hardware-duplication schemes are used (RAID, power from multiple sources, multiple communication channels, etc.);
    • the servers are located in a locked room, and this room is opened only for the duration of work that cannot be performed remotely;
    • Only one or two people have the right to open the server room; in case of an emergency, a notification system for responsible persons has been developed;
    • uninterrupted power supply to servers is ensured
    • normal climatic operating conditions of the equipment are ensured;
    • there is a fire alarm in the server room, there is no risk of flooding (especially for the first and last floors);
  • The settings of the network and information infrastructure of the enterprise are completed correctly:
    • Firewalls are installed and configured on all servers;
    • all users and computers are authorized on the network, passwords are complex enough that they cannot be guessed;
    • system operators have enough rights to work normally with it, but do not have rights to administrative actions;
    • anti-virus tools are installed and enabled on all computers on the network;
    • It is desirable that users (except network administrators) do not have administrative rights on client workstations;
    • access to the Internet and removable storage media should be regulated and limited;
    • system auditing of security events must be configured;
  • The main organizational issues have been resolved:
    • users have sufficient qualifications to work with 1C and hardware;
    • users are notified of responsibility for violating the operating rules;
    • financially responsible persons have been appointed for each material element of the information system;
    • all system units are sealed and closed;
    • Pay special attention to instructing and supervising cleaners, construction workers and electricians. These persons may, through negligence, cause damage that is not comparable to the intentional damage caused by an unscrupulous user of the system.

Attention! This list is not exhaustive, but only describes what is often missed when deploying any fairly complex and expensive information system!

  • MS SQL Server, the application server and the client part run on different computers, and the server applications run under the rights of specially created Windows users;
  • For MS SQL Server:
    • mixed authorization mode is enabled;
    • MS SQL users included in the serveradmin role are not used in 1C operation;
    • a separate MS SQL user without privileged access to the server is created for each 1C infobase;
    • the MS SQL user of one infobase has no access to other infobases;
  • Users have no direct access to the files of the application server and the MS SQL server;
  • Operator workstations run Windows 2000/XP (not Windows 95/98/Me).

Do not neglect the recommendations of the system developers and reading the documentation. Important materials on setting up the system are published on ITS disks in the “Methodological Recommendations” section. Pay special attention to the following articles:

  1. Features of applications working with the 1C:Enterprise server
  2. Data placement in 1C:Enterprise 8.0
  3. Updating 1C:Enterprise 8.0 by Microsoft Windows users without administrator rights
  4. Editing the user list on behalf of a user without administrative rights
  5. Configuring Windows XP SP2 firewall settings to run SQL Server 2000 and SQL Server Desktop Engine (MSDE)
  6. Configuring Windows XP SP2 COM+ parameters for running the 1C:Enterprise 8.0 server
  7. Configuring Windows XP SP2 firewall settings for the 1C:Enterprise 8.0 server
  8. Configuring Windows XP SP2 firewall settings for the HASP License Manager
  9. Creating a backup of an infobase using SQL Server 2000
  10. Installation and configuration issues of 1C:Enterprise 8.0 in the "client-server" version (one of the most important articles)
  11. Peculiarities of Windows Server 2003 settings when installing the 1C:Enterprise 8.0 server
  12. Regulating user access to the infobase in the client-server version (one of the most important articles)
  13. The 1C:Enterprise server and the SQL server
  14. Detailed installation procedure for 1C:Enterprise 8.0 in the "client-server" version (one of the most important articles)
  15. Using the built-in language on the 1C:Enterprise server
When reading the documentation, however, be critical of the information you receive; for example, the article "Issues of installing and configuring 1C:Enterprise 8.0 in the client-server version" does not accurately describe the rights required for the user USER1CV8SERVER. References to the list above appear below; for example, [ITS1] means the article "Features of applications working with the 1C:Enterprise server". All article references are given for the latest ITS issue at the time of writing (January 2006).

Use authorization capabilities combined with Windows authorization for users

Of the two available user authorization modes, built-in 1C authorization and combined authorization with Windows, prefer combined authorization where possible. It spares users from juggling multiple passwords while not lowering the security level of the system. However, even for users who use only Windows authorization, it is highly advisable to set a password when creating the user and only then disable 1C authorization for that user. To ensure the system can be recovered if the Active Directory structure is destroyed, keep at least one user who can log in to the system with 1C authorization.

When creating application solution roles, do not add rights “in reserve”

Each application solution role should reflect the minimal set of rights needed to perform the actions that role defines. Some roles need not be used on their own. For example, for the interactive launch of external data processors you can create a separate role and add it only to those users who need to run external processors.

Regularly review logs and system operation protocols

If possible, regulate and automate the viewing of logs and system operation protocols. With proper configuration and regular review of logs (filtering only by important events), unauthorized actions can be detected early or even prevented during the preparation phase.

Some features of the client-server version

This section describes some of the operating features of the client-server option and their impact on security. For greater ease of reading, the following notations are used:

Attention! marks the description of a vulnerability.

Storing information that controls access to the system

Storing the list of infobase users

All information about the list of users of a given infobase and the roles available to them in it is stored in the Params table of the MS SQL database (see [ITS2]). A look at the structure and contents of this table makes it obvious that all user information is stored in the record whose FileName field value is "users.usr".

Since we assume that users have no access to the MS SQL database, this fact by itself cannot be used by an attacker; however, if code execution in MS SQL is possible, this "opens the door" to obtaining any(!) access from within 1C. The same mechanism (with minor changes) also works in the file version of the system, which, given the other features of the file version, completely rules it out for building secure systems.

Recommendation: There is currently no way to fully protect the application from such changes, other than triggers at the MS SQL Server level, which in turn can cause problems when the platform version is updated or the user list is changed. To track such changes you can use the 1C log (watching for "suspicious" logins in configurator mode with no user specified), keep SQL Profiler running constantly (which severely degrades performance), or configure the Alerts mechanism (most likely in combination with triggers).

Storing information about the list of infobases on the server

Each 1C application server stores information about the list of MS SQL databases connected to it. Each infobase has its own connection string used between the application server and the MS SQL server. Information about the infobases registered on the application server, together with their connection strings, is stored in the file srvrib.lst, located on the server in the <Common Application Data>/1C/1Cv8 directory (for example, C:/Documents and Settings/All Users/Application Data/1C/1Cv8/srvrib.lst). For each infobase the complete connection string is stored, including the MS SQL user's password when the mixed MS SQL authorization model is used. It is the presence of this file that gives grounds to fear unauthorized access to the MS SQL database, and if, contrary to the recommendations, a privileged user (for example, "sa") is used to access at least one database, then besides the threat to one infobase there is a threat to the entire system using that MS SQL server.

It is interesting to note that mixed authorization and Windows authorization on the MS SQL server lead to different kinds of problems when someone gains access to this file. The key negative properties of Windows authorization are:

  • all infobases operate on the application server and the MS SQL server under a single (most likely excessive) set of rights;
  • from the context of the 1C application server process (in general, of the user USER1CV8SERVER or its equivalent) one can easily connect to any infobase without specifying a password.

On the other hand, it may be harder for an attacker to execute arbitrary code in the context of the user USER1CV8SERVER than simply to obtain this file. Incidentally, the existence of this file is one more argument for distributing the server functions across different computers.

Recommendation: The srvrib.lst file should only be accessible by the server process. Be sure to configure auditing to change this file.

Unfortunately, by default this file is hardly protected from reading at all, which must be taken into account when deploying the system. The ideal option would be for the application server to block reading and writing of this file while it is running (including by user connections executing on this server).

Lack of authorization when creating an infobase on the server

Attention! The missing-authorization error was fixed in release 8.0.14 of the 1C:Enterprise platform. That release introduced the concept of a "1C:Enterprise Server Administrator", but until the list of administrators is actually specified on the server, the system behaves as described below, so keep this possible feature in mind.

Probably the greatest vulnerability in this section is the ability to add infobases to the application server almost without restriction, as a result of which any user who can connect to the application server automatically gains the ability to run arbitrary code on it. Let's look at an example.

Suppose the system is installed as follows:

  • MS SQL Server 2000 (for example, network name SRV1)
  • Server 1C:Enterprise 8.0 (network name SRV2)
  • Client part 1C:Enterprise 8.0 (network name WS)

It is assumed that the user (hereafter USER) working on WS has at least minimal access to one of the infobases registered on SRV2 but no privileged access to SRV1 or SRV2. Combining the functions of the listed computers does not, in general, change the situation. The system was configured following the recommendations in the documentation and on the ITS disks. The situation is shown in Fig. 2.


To counter this vulnerability, it is recommended that you:

  • configure COM+ security on the application server so that only 1C users have the right to connect to the application server process (for details see [ITS12]);
  • make the srvrib.lst file read-only for the user USER1CV8SERVER (to add a new infobase to the server, temporarily allow writing);
  • To connect to MS SQL, use only the TCP/IP protocol, in this case you can:
    • restrict connections using a firewall;
    • configure a non-standard TCP port, which will make it harder for "foreign" 1C infobases to connect;
    • use encryption of transmitted data between the application server and the SQL server;
  • configure the server firewall so that the use of third-party MS SQL servers is impossible;
  • use intranet security tools to exclude the possibility of an unauthorized computer appearing on the local network (IPSec, group security policies, firewalls, etc.);
  • Do not under any circumstances grant the user USER1CV8SERVER administrative rights on the application server.

Using code running on the server

When using the client-server version of 1C, the developer can distribute code execution between the client and the application server. For code (a procedure or function) to execute only on the server, it must be placed in a common module with the "Server" property set and, when execution of the module is allowed not only on the server, inside the restricted section "#If Server":

#If Server Then
Function OnServer(Param1, Param2 = 0) Export  // This function, despite its simplicity, executes on the server
    Param1 = Param1 + 12;
    Return Param1;
EndFunction
#EndIf
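
For illustration, a call to such a function from the client side might look like the following sketch (the call site is hypothetical; OnServer is the exported function above):

// Client-side code, e.g. in a form module.
// Control transfers to the application server, the parameters are
// serialized on the way there, and the return value is serialized back.
Result = OnServer(5);       // executes on the server; returns 5 + 12 = 17
Message(String(Result));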

When using code that runs on the server, you must take into account that:

  • the code runs with USER1CV8SERVER rights on the application server (COM objects and server files are available);
  • all user sessions are executed by one instance of the service, so, for example, a stack overflow on the server will cause all active users to disconnect;
  • debugging server modules is difficult (for example, you cannot set a breakpoint in the debugger), yet it still has to be done;
  • transferring control from the client to the application server and back can require significant resources with large volumes of transferred parameters;
  • use of interactive tools (forms, spreadsheet documents, dialog boxes), external reports and processing in code on the application server is impossible;
  • the use of global variables (application module variables declared with the indication "Export") is not allowed;

For more details, see [ITS15] and other ITS articles.

Special reliability requirements must be placed on the application server. In a properly built client-server system, the following conditions must be met:

  • no actions of the client application should interrupt the operation of the server (except for administrative cases);
  • the server cannot execute program code received from the client;
  • resources must be distributed “fairly” across client connections, ensuring server availability regardless of the current load;
  • in the absence of data blocking, client connections should not affect each other’s work;
  • the server must not have a user interface, but monitoring and logging tools must be developed for it;

In general, the 1C system is built in such a way as to come closer to these requirements (for example, it is impossible to force external processing to be performed on the server), but several unpleasant features still exist, therefore:

Recommendation: When designing the server side, adhere to the principle of a minimal interface: the number of entry points into server modules from the client application should be strictly limited, and their parameters strictly regulated.

Recommendation: When receiving parameters of procedures and functions on the server, validate them (check that each parameter has the expected type and range of values). Standard solutions do not do this, but in your own developments mandatory validation is highly desirable.

Recommendation: When building query text (and especially a parameter of the Execute command) on the server side, never use strings received from the client application.
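
A minimal sketch of such server-side validation might look like this (the function and type names are hypothetical; adapt them to your own solution):

#If Server Then
Function ChangePrice(ItemRef, NewPrice) Export
    // Validate types and ranges before touching any data
    If TypeOf(NewPrice) <> Type("Number") Or NewPrice < 0 Then
        Raise "Invalid NewPrice parameter";
    EndIf;
    If TypeOf(ItemRef) <> Type("CatalogRef.Items") Then
        Raise "Invalid ItemRef parameter";
    EndIf;
    // ... the actual data change goes here ...
    Return True;
EndFunction
#EndIf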

A general recommendation is to familiarize yourself with the principles of building secure web database applications and work along similar lines. The similarity is indeed considerable: first, like a web application, the application server is an intermediate layer between the database and the user interface (the main difference being that the web server also forms the user interface); second, from a security point of view you cannot trust data received from the client, because external reports and data processors can be launched there.

Passing parameters

Passing parameters to a function (procedure) executed on the server is a rather delicate issue. This is primarily due to the need to transfer them between the application server and client processes. When control passes from the client side to the server side, all transmitted parameters are serialized, transferred to the server, where they are “unpacked” and used. When moving from the server side to the client side, the process is reversed. It should be noted here that this scheme correctly handles passing parameters by reference and by value. When passing parameters, the following restrictions apply:

  • Only immutable values (i.e., values that cannot be changed) can be transferred between the client and the server (in both directions): primitive types, references, universal collections, system enumeration values, value storages. An attempt to pass anything else crashes the client application (even when it is the server trying to pass the invalid parameter).
  • It is not recommended to transfer large amounts of data when passing parameters (for example, strings of more than 1 million characters), this may negatively affect server performance.
  • You cannot pass parameters containing a cyclic reference, both from the server to the client and back. If you try to pass such a parameter, the client application crashes (even if the server tries to pass the incorrect parameter).
  • It is not recommended to transfer very complex data collections. An attempt to pass a parameter with a very deep nesting level crashes the server (!).

Attention! Probably the most annoying feature at the moment is the error in passing complex collections of values. For example, the following code:

NestingLevel = 1250;
M = New Array;
PassedParameter = M;
For Counter = 1 To NestingLevel Do
    MVInt = New Array;
    M.Add(MVInt);
    M = MVInt;
EndDo;
ServerFunction(PassedParameter);

This code leads to an emergency stop of the server, disconnecting all users, and the crash occurs before control is even transferred to code in the built-in language.
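
Until this error is fixed, one hedged workaround is to wrap a complex collection in a value storage before passing it, since value storages are in the list of passable types; this is a sketch under that assumption, not a guaranteed fix:

// Instead of passing the deeply nested collection directly:
Storage = New ValueStorage(PassedParameter);
ServerFunction(Storage);
// On the server side the collection is unpacked with Storage.Get()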

Using unsafe functions on the server side

Not all built-in language tools can be used in code executed on the application server, and even among the available tools there are many "problematic" constructs, which can be roughly classified as follows:

  • capable of executing code not contained in the configuration (the "Code execution" group)
  • capable of giving the client application information about the file system and operating system of the server, or of performing actions unrelated to working with data (the "Rights violation" group)
  • capable of crashing the server or consuming very large resources (the "Server crash" group)
  • capable of crashing the client application (the "Client crash" group) – this type is not considered here; example: passing a mutable value to the server
  • programming errors in algorithms (endless loops, unbounded recursion, etc.) (the "Programming errors" group)

The main problematic constructs known to me are listed below (with examples):

The Execute(<String>) procedure

Code execution. Executes a fragment of code passed to it as a string value. When used on the server, make sure that data received from the client is not used as the parameter. For example, the following usage is not allowed:

#If Server Then
Procedure OnServer(Param1) Export
    Execute(Param1);
EndProcedure
#EndIf
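
If the client really must choose an action, a safer pattern is to accept only an action identifier and dispatch explicitly on the server, so that the client string never reaches Execute (a sketch; the action names are hypothetical):

#If Server Then
Procedure RunActionOnServer(ActionName) Export
    // Explicit whitelist instead of Execute(ClientString)
    If ActionName = "RecalculateTotals" Then
        RecalculateTotals();
    ElsIf ActionName = "RebuildIndex" Then
        RebuildIndex();
    Else
        Raise "Unknown action: " + ActionName;
    EndIf;
EndProcedure
#EndIf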

The "COMObject" type (constructor New COMObject(<Name>, <Server name>))

Creates a COM object of an external application with USER1CV8SERVER rights on the application server (or another specified computer). When used on the server, make sure that the parameters are not passed from the client application. That said, on the server side this capability is effective for import/export, sending data over the Internet, implementing non-standard functions, and so on.
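
A legitimate server-side use might look like the following sketch (it assumes the Scripting.FileSystemObject COM class is registered on the application server; the path is illustrative):

#If Server Then
Procedure ExportTextOnServer(Text) Export
    // Runs with USER1CV8SERVER rights on the application server
    FSO = New COMObject("Scripting.FileSystemObject");
    ExportFile = FSO.CreateTextFile("C:\Export\data.txt", True);
    ExportFile.WriteLine(Text);
    ExportFile.Close();
EndProcedure
#EndIf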

Function GetCOMObject(<File name>, <COM class name>)

Rights violation and code execution. Similar to the previous one, except that it obtains the COM object corresponding to a file.

Procedures and functions ComputerName(), TemporaryFileDirectory(), ProgramDirectory(), WindowsUsers()

Rights violation. Executed on the server, they reveal details of how the server subsystem is organized. When using them on the server, make sure the data either does not reach the client or is unavailable to operators without the appropriate permission. Pay special attention to the fact that data can also be returned through a parameter passed by reference.
Procedures and functions for working with files (CopyFile, FindFiles, MergeFiles and many others), as well as the File type.

Rights violation. Executed on the server, they give access to local (and network) files accessible under the rights of the user USER1CV8SERVER. Used deliberately, they make it possible to implement tasks such as importing/exporting data on the server effectively.

Be sure to check your 1C user rights before using these functions. To check user rights, you can use the following construct in the server module:

#If Server Then
Procedure PerformWorkWithFile() Export
	RoleAdministrator = Metadata.Roles.Administrator;
	User = SessionParameters.CurrentUser;
	If User.Roles.Contains(RoleAdministrator) Then
		// The code for working with files is executed here
	EndIf;
EndProcedure
#EndIf

Be sure to validate the parameters if you use these procedures and functions; otherwise you risk accidentally or deliberately causing irreparable harm to the 1C application server, for example by executing the following code on the server:

Path = "C:\Documents and Settings\All Users\Application Data\1C\1Cv8\";
MoveFile(Path + "srvrib.lst", Path + "Here'sWhereTheFileGoes");

After such code executes on the server (provided the user USER1CV8SERVER has rights to modify the file), and after the server process restarts (by default, 3 minutes after all users disconnect), whether the server starts at all becomes a BIG question. And it is also possible to delete the files entirely...
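A hedged sketch of one possible safeguard (the exchange directory path and all names here are illustrative assumptions, not platform conventions): before any server-side file operation, check that the target path lies inside a dedicated exchange directory rather than in the server's own folders.

#If Server Then
// Illustrative helper: permit file operations only inside a dedicated
// exchange directory (the path below is an assumption).
Function IsPathAllowed(FullPath)
	AllowedDirectory = Upper("D:\Exchange\");
	Return Find(Upper(TrimAll(FullPath)), AllowedDirectory) = 1;
EndFunction

Procedure MoveFileSafely(SourcePath, TargetPath) Export
	If IsPathAllowed(SourcePath) And IsPathAllowed(TargetPath) Then
		MoveFile(SourcePath, TargetPath);
	Else
		Raise "File operations outside the exchange directory are prohibited.";
	EndIf;
EndProcedure
#EndIf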

Types "XBase", "BinaryData", "XML Reader", "XML Writer", "XSL Transformation", "ZipFile Writer", "ZipFile Reader", "Text Reader", "Text Writer"
Rights violation. Executed on the server, they give read/write access to local (and network) files of certain types under the rights of the user USER1CV8SERVER. Used deliberately, they make it possible to implement tasks such as importing/exporting data on the server, logging the operation of certain functions, and solving administrative tasks. In general, the recommendations coincide with the previous paragraph, but you should also consider the possibility of transferring the data of these files (though not objects of these types themselves) between the client and server parts.
Type "SystemInformation"
Rights violation. If misused, it allows obtaining data about the application server and passing it to the client part of the application. It is advisable to restrict the right to use it.
Types "InternetConnection", "InternetMail", "InternetProxy", "HTTPConnection", "FTPConnection"

Rights violation. Used on the server, these types connect from the application server to a remote computer under the rights of USER1CV8SERVER. Recommendations:

  • Control of parameters when calling methods.
  • Control of 1C user rights.
  • Severe restrictions on the rights of the user USER1CV8SERVER to access the network.
  • Correctly setting up the firewall on the 1C application server.

When used correctly, it is convenient to organize, for example, sending emails from an application server.

Types "InformationBaseUserManager", "InformationBaseUser"

Rights violation. If used incorrectly (in a privileged module), they make it possible to add users or change the authentication parameters of existing users.

Function Format

Server crash. Yes! If its parameters are not validated when it executes on the server, this seemingly harmless function can crash the server application. The error occurs when formatting numbers with the leading-zeros display mode and a large number of digits, for example:

Format(1, "CHZ=999; CHVN=");

I hope this error will be fixed in upcoming platform releases; in the meantime, validate the parameters of every call to this function that can execute on the server.
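As a hedged illustration (the wrapper name, the "CHZ=" parsing and the 50-digit threshold are all assumptions of this sketch, not platform requirements), such a check might look like this:

// Illustrative wrapper: refuse format strings that request an excessive
// number of digits before the call ever reaches Format.
Function SafeFormat(Value, FormatString) Export
	Position = Find(FormatString, "CHZ=");
	If Position > 0 Then
		Tail = Mid(FormatString, Position + 4);
		SeparatorPosition = Find(Tail, ";");
		If SeparatorPosition > 0 Then
			Tail = Left(Tail, SeparatorPosition - 1);
		EndIf;
		If Number(TrimAll(Tail)) > 50 Then // 50 is an arbitrary safety margin
			Raise "The format string requests too many digits.";
		EndIf;
	EndIf;
	Return Format(Value, FormatString);
EndFunction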

Procedures and functions for saving values ​​(ValueInRowInt, ValueInFile)
Server crash. These functions do not handle circular references in collections or very deep nesting, so they may crash in some very special cases.

Errors with boundary and special parameter values in functions. Controlling execution.

One of the problems you may encounter when using the server is the high "responsibility" of server functions: an error in one connection can crash the entire server application, and all connections share one "resource space". Hence the need to control the main runtime parameters:

  • For built-in language functions, check their launch parameters (a good example is the “Format” function)
  • When using loops, make sure that the loop exit condition can be reached. If a loop is potentially infinite, artificially limit the number of iterations:

    MaximumIterationCount = 1000000;
    IterationCounter = 1;
    While FunctionWhichMayNeverReturnFalse()
        And (IterationCounter < MaximumIterationCount) Do
        // .... body of the loop
        IterationCounter = IterationCounter + 1;
    EndDo;
    If IterationCounter >= MaximumIterationCount Then
        // .... handle the event of an excessively long loop execution
    EndIf;

  • When using recursion, limit the maximum nesting level.
  • When forming and executing queries, try to prevent very long selections and selections of large amounts of information (for example, do not use an empty value with the "IN HIERARCHY" condition)
  • When designing the infobase, provide a sufficiently large reserve of digit capacity for numbers (otherwise addition and multiplication become non-commutative and non-associative, which complicates debugging)
  • In executable queries, check the logic for the presence of NULL values and for correct behavior of query conditions and expressions involving NULL.
  • When using collections, control the ability to transfer them between the application server and the client side.
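For the recursion point above, a minimal sketch (the procedure name, the SubordinateItems property and the limit of 100 levels are illustrative assumptions): pass the current depth explicitly and stop before the platform's call stack does it for you.

// Illustrative recursive traversal with an explicit nesting limit.
Procedure ProcessItemRecursively(Item, NestingLevel = 0) Export
	MaximumNestingLevel = 100; // arbitrary safety margin
	If NestingLevel > MaximumNestingLevel Then
		Raise "Maximum nesting level exceeded; the data may be cyclic.";
	EndIf;
	// .... process the current item here
	For Each Subordinate In Item.SubordinateItems Do
		ProcessItemRecursively(Subordinate, NestingLevel + 1);
	EndDo;
EndProcedure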

Using terminal access to the client side to restrict access

Recommendations to use terminal access to restrict data access and to improve performance by executing client-side code on the terminal server are common. Properly configured, terminal access can indeed raise the overall security of the system; unfortunately, in practice one often finds that it lowers it instead. Let's try to figure out why. Today there are two widespread means of organizing terminal access: Microsoft Terminal Services (the RDP protocol) and Citrix MetaFrame Server (the ICA protocol). Citrix tools provide much more flexible access administration, but the price of those solutions is much higher, so we will consider only the basic features common to both protocols that can reduce the overall security level. There are three main dangers when using terminal access:
  • Ability to block the work of other users by seizing excessive amounts of resources
  • Access to other users' data.
  • Unauthorized copying of data from the terminal server to the user’s computer

In any case, Terminal Services allows you to:

  • Increase the reliability of work (if there is a failure on the terminal computer, the user can subsequently continue working from the same place)
  • Restrict access to the client application and files it saves.
  • Transfer the computing load from the user's workstation to the terminal access server
  • Manage system settings more centrally. For users, the saved settings will be valid regardless of which computer they logged into the system from.
  • In some cases, you can use a terminal solution for remote access to the system.

It is necessary to limit the number of possible connections to the terminal server for one user

Due to the "gluttony" of the 1C client application regarding resources, it is imperative to limit the maximum number of simultaneous connections of one user (operator) to the terminal server. An actively used connection can use up to 300 MB of memory with just one instance of the application. In addition to memory, CPU time is actively used, which also does not contribute to the stability of the users of this server. At the same time as preventing excessive use of server resources, such a restriction can prevent the use of someone else's account. Implemented by standard terminal server settings.

You should not allow more than one or two 1C client applications to run simultaneously in one connection

Dictated by the same reasons as the previous point, but technically harder to implement. The problem is that it is almost impossible to prevent 1C from being relaunched using terminal server tools (why is explained below), so the restriction has to be implemented at the level of the application solution (which is also not a good solution, since sessions may remain "hanging" for a while after an abnormal termination, and the application module and some reference books have to be modified, complicating the use of updates from 1C). It is highly desirable to leave the user the ability to run two applications, so that some actions (for example, report generation) can run in the background: the client application, unfortunately, is effectively single-threaded.

Do not give terminal server access rights to users who are allowed to run resource-intensive computing tasks in 1C, or prevent such launches while other users are actively working.

Of course, it is better to grant terminal server access only to users who do not use tasks such as data mining, geographic diagrams, import/export, and other tasks that seriously load the client part of the application. If there is still a need to allow such tasks, then: notify the user that these tasks can affect other users' performance; record the start and end of such a process in the log; allow execution only at regulated times; and so on.

It is necessary to make sure that each user has write rights only to strictly defined directories on the terminal server and that other users do not have access to them.

Firstly, if you do not limit the ability to write to shared directories (such as the directory where 1C is installed), an attacker can change the behavior of the program for all users. Secondly, the data of one user (temporary files, saved report settings, etc.) must under no circumstances be accessible to another user of the terminal server; with normal configuration this rule is followed. Thirdly, an attacker can still "litter" the partition so that no space remains on the hard drive. I know some will object that Windows, starting with Windows 2000, has a quota mechanism, but it is a rather expensive mechanism, and I have practically never seen it used in real deployments.

While the previous access settings were generally quite easy to implement, such a (seemingly) simple task as regulating user access to files is not trivial. Firstly, if the quota mechanism is not used, large files can be saved. Secondly, the system is built in such a way that it is almost always possible to save a file so that it is accessible to another user.

Considering that the task is difficult to solve completely, it is recommended to audit most file events.

It is necessary to prohibit the connection (mapping) of disk devices, printers and the clipboard of the client workstation.

RDP and ICA make it possible to automatically connect the disks, printers, clipboard and COM ports of the terminal client to the server. If this capability is available, it is almost impossible to prevent foreign code from being launched on the terminal server or data from 1C from being saved on the terminal access client. Allow these features only for users with administrative rights.

Network file access from the terminal server should be limited.

If this is not done, the user will again be able to run unwanted code or save data. Since the standard log does not track file events (a good idea for the platform developers to implement, by the way), and setting up system auditing across the entire network is almost impossible (there are not enough resources to maintain it), it is better for the user to be able to send data only to a printer or by email. Pay special attention to ensuring that the terminal server does not work directly with users' removable media.

Under no circumstances should you leave the application server on the terminal server when creating a secure system.

If the application server runs on the same computer as client applications, then there are many opportunities to disrupt its normal operation. If for some reason it is impossible to separate the functions of the terminal server and application server, then pay special attention to user access to files used by the application server.

It is necessary to exclude the possibility of running all applications except 1C:Enterprise on the terminal server.

This is one of the most difficult wishes to implement. Let's start with the fact that you need to correctly configure the Group Security Policy policy in the domain. All Administrative Templates and Software Restriction Policies must be configured correctly. To test yourself, make sure that at least the following features are blocked:

The complexity of implementing this requirement often leads to the possibility of launching an “extra” 1C session on the terminal server (even if other applications are limited, it is fundamentally impossible to prohibit the launch of 1C using Windows).

Consider the limitations of the regular log (all users use the program from one computer)

Obviously, since users open 1C in terminal mode, the terminal server will be recorded in the log; the log does not indicate from which computer the user actually connected.

Terminal Server – Protection or Vulnerability?

So, having considered the main features of the terminal server, we can say that it can potentially help distribute the computing load, but building a secure system on it is quite difficult. One of the cases where a terminal server is most effective is running 1C without Windows Explorer, in full screen mode, for users with limited functionality and a specialized interface.

Work of the client part

Using Internet Explorer (IE)

One of the prerequisites for normal operation of the 1C client part is the use of Internet Explorer components. You need to be very careful with them.

Attention! Firstly, if a spyware or adware module has "attached" itself to IE, it will be loaded whenever you view any HTML file in 1C. So far I have not seen deliberate exploitation of this feature, but in one organization I did see a "spy" module from a pornographic network loaded while 1C was running (the antivirus was not updated; the symptom: during firewall setup it was clear that 1C was trying to connect to a porn site on port 80). This is, in fact, one more argument that protection must be comprehensive.

Attention! Secondly, the 1C system allows Flash movies, ActiveX objects and VBScript in displayed HTML documents, as well as sending data to the Internet and even opening PDF files (!), although in the latter case it asks "open or save". In general, anything your heart desires. An example of a not entirely reasonable use of the built-in HTML viewing and editing capabilities:

  • Create a new HTML document (File -> New -> HTML Document).
  • Go to the "Text" tab of the blank document.
  • Remove the text (entirely).
  • Go to the "View" tab of this document
  • Using drag-and-drop, move a file with the SWF extension (a Flash movie file) from an open Explorer window into the document window, for example from the browser cache; for fun, a Flash game will do as well.
  • How lovely! You can run a game right inside 1C!

From a system security point of view, this is completely wrong. I have not yet seen targeted attacks on 1C through this vulnerability, but most likely it is only a matter of time and of the value of your information.

There are some other minor issues that arise when working with an HTML document field, but the main ones are the two listed. Although, if you approach these features creatively, you can organize truly amazing interface capabilities for working with 1C.

Using external reports and processing.

Attention! External reports and processors are, on the one hand, a convenient way to implement additional printed forms, regulated reporting and specialized reports; on the other hand, they are a potential way to bypass many system security restrictions and to disrupt the operation of the application server (for an example, see "Passing parameters" above). The 1C rights system includes a dedicated permission, "Interactive opening of external processors", but it does not completely solve the problem: a complete solution requires narrowing the circle of users who can manage external printed forms, regulated reports and other standard capabilities of standard solutions implemented with external processors. For example, by default in UPP all main user roles can work with the directory of additional printed forms, and that is, in effect, the ability to run any external processor.

Using standard mechanisms for standard solutions and platforms (data exchange)

Some of the standard mechanisms are potentially dangerous, and in unexpected ways.

Printing lists

Any list (for example, a directory or information register) in the system can be printed or saved to a file. To do this, just use the standard feature available from the context menu and the “Actions” menu:

Keep in mind that virtually everything the user sees in lists can be output to external files. The only general advice is to keep a log of document printing on print servers. For particularly critical forms, configure the command panel associated with the protected table field so that the option to display the list is not available from that panel, and disable the context menu (see Figure 6).

Data exchange in a distributed database

The data exchange format is quite simple and is described in the documentation. If the user has the ability to replace several files, he can make unauthorized changes to the system (although this is quite a labor-intensive task). The ability to create a peripheral database when using distributed database exchange plans should not be available to ordinary operators.

Standard XML Data Interchange

In the standard data exchange used between standard configurations (for example, "Trade Management" and "Enterprise Accounting"), the exchange rules can specify event handlers for loading and unloading objects. The standard import/export processor obtains the handler from the rules file and executes it with the "Run()" procedure (which runs on the client side). Obviously, it is not difficult to craft a fake exchange file that performs malicious actions. In standard solutions, most user roles are allowed to perform the exchange by default.

Recommendation: restrict access to XML exchange for most users (leave it only to information security administrators). Keep a log of runs of this processor and preserve the exchange file, for example by emailing it to the information security administrator before it is imported.

Using generic reports, especially the Reports Console

Another issue is users' default access to universal reports, especially the Report Console. This report allows executing virtually any query against the infobase, and even if the 1C rights system (including RLS) is configured quite strictly, it lets a user obtain a great deal of "extra" information and force the server to execute a query that consumes all system resources.

Using Full Screen Mode (Desktop Mode)

One effective way to organize specialized interfaces with limited access to program functionality is the full-screen mode of the main (and possibly only) interface form. In this case there are no accessibility issues with, for example, the "File" menu: all user actions are limited by the capabilities of the form used. For details, see "Features of implementing desktop mode" on the ITS disk.

Backup

Backup for the client-server version of 1C can be performed in two ways: exporting data to a file with the dt extension, or creating backups with SQL tools. The first method has many disadvantages: exclusive access is required; creating the copy takes much longer; in some cases (when the infobase structure is corrupted) creating the archive is impossible; but it has one advantage: the minimal archive size. SQL backup is the opposite: the copy is created in the background by the SQL server; thanks to the simple structure and absence of compression it is a very fast process; and as long as the physical integrity of the SQL database is intact, the backup succeeds; but the size of the copy matches the true size of the infobase in expanded form (no compression is performed). Because of the additional advantages of the MS SQL backup system, it is preferable to use it (three backup types are available: full, differential, and transaction log; jobs can be scheduled; a backup copy and the backup system deploy quickly; the required disk space can be predicted; and so on). The main points of organizing backups from a security standpoint are:

  • The need to choose a storage location for backups so that they are not accessible to users.
  • The need to store backups at a physical distance from the MS SQL server (in case of natural disasters, fires, attacks, etc.)
  • The ability to give rights to start a backup to a user who does not have access to backups.

For more details, please refer to the MS SQL documentation.
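As a sketch of the points above (the backup share path and names are assumptions), a full backup plus a differential one, directed to an access-restricted location, can look like this; both can then be scheduled as SQL Server Agent jobs:

-- Illustrative full backup of the SomeData database to a dedicated,
-- access-restricted share (the path is an assumption).
BACKUP DATABASE SomeData
TO DISK = '\\BackupServer\SQLBackups\SomeData_full.bak'
WITH INIT, NAME = 'SomeData full backup'

-- Differential backups between full ones keep the job short.
BACKUP DATABASE SomeData
TO DISK = '\\BackupServer\SQLBackups\SomeData_diff.bak'
WITH DIFFERENTIAL, NAME = 'SomeData differential backup'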

Data encryption

To protect data from unauthorized access, various cryptographic tools (both software and hardware) are often used, but their feasibility largely depends on the correct application and overall security of the system. We will look at data encryption at various stages of data transmission and storage using the most common means and the main errors in system design using cryptographic tools.

There are several main stages of information processing that can be protected:

  • Data transfer between the client part of the system and the application server
  • Transferring data between the application server and MS SQL Server
  • Data stored on MS SQL Server (data files on physical disk)
  • Encryption of data stored in the infobase
  • External data (in relation to the infobase)

For data stored on the client side and on the application server (saved user settings, the list of infobases, etc.), encryption is justified only in very rare cases and is therefore not considered here. When using cryptographic tools, do not forget that they can significantly reduce the performance of the system as a whole.

General information about cryptographic protection of network connections when using the TCP/IP protocol.

Without security, all network connections are vulnerable to unauthorized surveillance and access. To protect them, you can use data encryption at the network protocol level. To encrypt data transmitted on a local network, IPSec tools provided by the operating system are most often used.

IPSec tools provide encryption of transmitted data using DES and 3DES algorithms, as well as integrity verification using MD5 or SHA1 hash functions. IPSec can operate in two modes: transport mode and tunnel mode. Transport mode is better suited for securing connections on a local network. Tunnel mode can be used to organize VPN connections between separate network segments or protect a remote connection to a local network over open data channels.

The main advantages of this approach are:

  • Possibility of centralized security management using Active Directory tools.
  • The ability to exclude unauthorized connections to the application server and MS SQL server (for example, protection against the unauthorized creation of infobases on the application server).
  • Elimination of "listening" of network traffic.
  • There is no need to change the behavior of application programs (in this case 1C).
  • The standard nature of such a solution.

However, this approach has limitations and disadvantages:

  • IPSec does not protect data from interference and eavesdropping directly on the source and destination computers.
  • The amount of data transferred over the network is slightly larger than without using IPSec.
  • When using IPSec, the load on the central processor is slightly higher.

A detailed description of the implementation of IPSec tools is beyond the scope of this article and requires an understanding of the basic principles of the functioning of the IP protocol. To properly configure connection security, please read the relevant documentation.

Separately, it is necessary to mention several aspects of the license agreement with 1C when organizing VPN connections. The fact is that, despite the absence of technical restrictions, when connecting several segments of a local network or remote access of an individual computer to a local network, several basic supplies are usually required.

Encryption of data when transferred between the client part of the system and the application server.

In addition to encryption at the network protocol level, data can be encrypted at the COM+ protocol level, as mentioned in the ITS article "Regulating user access to the information base in the client-server version". To implement this, set the authentication level for calls to "Packet Privacy" for the 1CV8 application in "Component Services". In this mode the packet is both authenticated and encrypted, including the data and the sender's identity and signature.

Encryption of data when transferred between the application server and MS SQL Server

MS SQL Server provides the following tools for data encryption:

  • It is possible to use Secure Sockets Layer (SSL) when transferring data between the application server and MS SQL Server.
  • When using the Multiprotocol network library, data encryption is used at the RPC level. This is potentially weaker encryption than using SSL.
  • If the Shared Memory exchange protocol is used (this happens if the application server and MS SQL Server are located on the same computer), then encryption is not used in any case.

To require encryption of all transmitted data for a specific MS SQL server, use the "Server Network Utility". Run it and, on the "General" tab, check the "Force protocol encryption" checkbox. The encryption method is chosen according to the one used by the client application (i.e., the 1C application server). To use SSL, the certificate service must be configured correctly on your network.

To require encryption of all transmitted data for a specific application server, use the "Client Network Utility" (usually located at "C:\WINNT\system32\cliconfg.exe"). As in the previous case, check the "Force protocol encryption" checkbox on the "General" tab.

It is worth considering that the use of encryption in this case can have a significant impact on system performance, especially when using queries that return large amounts of information.

In order to more fully protect the connection between the application server and MS SQL Server when using the TCP/IP protocol, we can recommend several changes to the default settings.

Firstly, you can set a port other than the standard one (port 1433 is used by default). If you decide to use a non-standard TCP port for data exchange, please note that:

  • The MS SQL server and the application server must use the same port.
  • When using firewalls, this port must be allowed.
  • You cannot set a port that can be used by other applications on the MS SQL server. For reference, you can use http://www.ise.edu/in-notes/iana/assignments/port-numbers (address taken from SQL Server Books Online).
  • When using multiple instances of the MS SQL Server service, be sure to read the MS SQL documentation for configuration (section "Configuring Network Connections").

Secondly, in the TCP/IP protocol settings on the MS SQL server, you can set the "Hide server" flag, which prohibits responses to broadcast requests for this instance of the MS SQL Server service.

Encryption of MS SQL data stored on disk

There is a fairly large selection of software and hardware tools for encrypting data on a local disk (the standard Windows EFS facility, eToken keys, and third-party programs such as Jetico BestCrypt or PGPDisk). One of the main tasks these tools perform is protecting data if the media is lost (for example, if the server is stolen). It is worth noting that Microsoft does not recommend storing MS SQL databases on encrypted media, and with good reason: the main problem is a significant drop in performance and possible reliability problems in the event of failures. A second factor complicating the system administrator's life is the need to ensure that all database files are available at the moment the MS SQL service first accesses them (i.e., it is desirable to exclude interactive actions when mounting the encrypted medium).

To avoid a noticeable drop in system performance, you can use MS SQL's ability to create a database in several files. Naturally, in this case the MS SQL database should not be created by the 1C server when the infobase is created; it should be created separately. An example T-SQL script with comments is given below:

USE master
GO
-- Create a database SomeData,
CREATE DATABASE SomeData
-- the data of which is entirely located in the PRIMARY filegroup.
ON PRIMARY
-- The main data file is located on encrypted media (logical drive E:),
-- has an initial size of 100 MB, and can grow automatically to 200 MB
-- in 20 MB increments
(NAME = SomeData1,
FILENAME = 'E:\SomeData1.mdf',
SIZE = 100MB,
MAXSIZE = 200,
FILEGROWTH = 20MB),
-- The second data file is located on unencrypted media (logical drive C:),
-- has an initial size of 100 MB, and can grow automatically to the limit of
-- disk space in increments of 5% of the current file size (rounded to 64 KB)
(NAME = SomeData2,
FILENAME = 'c:\program files\microsoft sql server\mssql\data\SomeData2.ndf',
SIZE = 100MB,
MAXSIZE = UNLIMITED,
FILEGROWTH = 5%)
LOG ON
-- Although the transaction log could also be divided into parts, this should not be done:
-- this file changes much more often and is regularly truncated (for example, when
-- a database backup is created).
(NAME = SomeDatalog,
FILENAME = 'c:\program files\microsoft sql server\mssql\data\SomeData.ldf',
SIZE = 10MB,
MAXSIZE = UNLIMITED,
FILEGROWTH = 10)
GO
-- It is better to immediately give ownership of the database to the user on whose
-- behalf 1C will connect. To do this, declare the newly created database
-- the current one,
USE SomeData
GO
-- and execute sp_changedbowner
EXEC sp_changedbowner @loginame = 'SomeData_dbowner'

A short digression about automatic growth of the data file size. By default, the files of new databases grow in increments of 10% of the current file size. This is perfectly acceptable for small databases, but not so good for large ones: with a database of, say, 20 GB, the file grows by 2 GB at once. Although this event occurs quite rarely, it can last several tens of seconds (all other transactions are effectively idle during this time), and if it happens during active work with the database it can cause failures. A second negative consequence of proportional growth appears when disk space is almost full: the likelihood of premature failure due to insufficient free space. For example, if a 40 GB disk partition is entirely dedicated to one database (more precisely, to one file of that database), then the critical file size at which information storage must be reorganized urgently (very urgently, to the point of interrupting users' normal work) is 35 GB. With a fixed increment of 10-20 MB, you can keep working until 39 GB.

Therefore, although the listing above sets one of the database files to grow in 5% increments, for large databases it is better to set a fixed increment of 10-20 MB. When choosing growth increments, keep in mind that until one of the files in a filegroup reaches its maximum size, the following rule applies: files in one filegroup all grow at the same time, once all of them are completely full. So in the example above, by the time the SomeData1.mdf file reaches its maximum size of 200 MB, the SomeData2.ndf file will be about 1.1 GB.
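The arithmetic behind this recommendation can be sketched in a few lines. The model below (Python, purely for illustration; the 10 GB starting size and 40 GB partition are taken from the example in the text) shows how much earlier proportional growth stalls than a fixed increment does; the exact stall point depends on the starting size, which is why it comes out slightly above the 35 GB cited above:

```python
# Toy model of SQL Server data-file autogrowth on a dedicated partition:
# proportional (10%) growth vs a fixed 10 MB increment.

def max_usable_size_mb(partition_mb, start_mb, grow):
    """Largest file size reachable before the next growth step no longer fits."""
    size = start_mb
    while True:
        step = grow(size)
        if step <= 0 or size + step > partition_mb:
            return size
        size += step

gb = 1024  # MB per GB

# A single 10% growth step of a 20 GB file is 2 GB at once:
assert int(20 * gb * 0.10) == 2 * gb

proportional = max_usable_size_mb(40 * gb, 10 * gb, lambda s: int(s * 0.10))
fixed = max_usable_size_mb(40 * gb, 10 * gb, lambda s: 10)

print(f"10% growth stalls at {proportional / gb:.1f} GB")   # well short of the disk
print(f"fixed 10 MB growth reaches {fixed / gb:.1f} GB")    # almost the whole disk
```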

After creating such a database, even if its unprotected files SomeData2.ndf and SomeData.ldf become accessible to an attacker, it will be extremely difficult to restore the true state of the database - the data (including information about the logical structure of the database) will be scattered across several files, and key information (about, for example, which files make up this database) will be in the encrypted file.

Of course, if database files are stored using cryptographic means, then backups (at least of those files) must not be made to unencrypted media. To back up individual database files, use the corresponding syntax of the BACKUP DATABASE command. Note that although a database backup can be protected with a password (the "PASSWORD = " and "MEDIAPASSWORD = " options of the "BACKUP DATABASE" command), such a backup does NOT thereby become encrypted!

Encryption of application server and client data stored on disks

In most cases, storing the files used by 1C:Enterprise (the client part and the application server) on encrypted media cannot be considered justified because of the unreasonably high cost. However, if such a need does exist, note that both the application server and the client part of the application create temporary files very often. These files can remain after the application has finished running, and it is practically impossible to guarantee their removal by 1C means. It therefore becomes necessary to encrypt the directory 1C uses for temporary files, or to keep it off disk entirely by using a RAM drive (the latter is not always possible given the size of the generated files and the RAM requirements of the 1C:Enterprise application itself).

Data encryption using built-in 1C tools.

The standard capabilities for using encryption in 1C come down to the objects for working with ZIP files, which accept encryption parameters. The following encryption modes are available: the AES algorithm with a key of 128, 192 or 256 bits, and the obsolete algorithm originally used in the Zip archiver. ZIP files encrypted with AES are unreadable by many archivers (WinRAR, 7zip). To produce a file with encrypted data, you must specify a password and an encryption algorithm. The simplest example of encryption/decryption functions based on this feature is given below:

Function EncryptData(Data, Password, EncryptionMethod = Undefined) Export

    // Write the data to a temporary file. In fact, not every value can be saved this way.
    TemporaryFileName = GetTempFileName();
    ValueToFile(TemporaryFileName, Data);

    // Pack the temporary file into an encrypted archive
    TemporaryArchiveFileName = GetTempFileName("zip");
    Zip = New ZipFileWriter(TemporaryArchiveFileName, Password, , , , EncryptionMethod);
    Zip.Add(TemporaryFileName);
    Zip.Write();

    // Read the resulting archive into memory
    EncryptedData = New ValueStorage(New BinaryData(TemporaryArchiveFileName));

    // Delete the temporary files
    DeleteFiles(TemporaryFileName);
    DeleteFiles(TemporaryArchiveFileName);

    Return EncryptedData;

EndFunction

Function DecryptData(EncryptedData, Password) Export

    // Warning! The correctness of the passed parameters is not checked

    // Write the passed value to a file
    TemporaryArchiveFileName = GetTempFileName("zip");
    BinaryArchiveData = EncryptedData.Get();
    BinaryArchiveData.Write(TemporaryArchiveFileName);

    // Extract the first file of the archive just written
    TemporaryFileName = GetTempFileName();
    Zip = New ZipFileReader(TemporaryArchiveFileName, Password);
    Zip.Extract(Zip.Items[0], TemporaryFileName, ZIPRestoreFilePathsMode.DontRestore);

    // Read the extracted file
    Data = ValueFromFile(TemporaryFileName + "\" + Zip.Items[0].Name);

    // Delete the temporary files
    DeleteFiles(TemporaryFileName);
    DeleteFiles(TemporaryArchiveFileName);

    Return Data;

EndFunction

Of course, this method cannot be called ideal: the data is written to a temporary folder in clear text, the performance is frankly poor, and storage in the database requires a very large amount of space. But it is the only method based solely on the built-in mechanisms of the platform. It also has an advantage over many other approaches: it compresses the data at the same time as encrypting it. If you want encryption without the disadvantages of this method, you must either implement it in an external component or turn to existing libraries by creating COM objects, for example using Microsoft CryptoAPI. As an example, here are functions that encrypt/decrypt a string from a given password.

Function EncryptStringDES(UnencryptedString, Password)

    CAPICOM_ENCRYPTION_ALGORITHM_DES = 2; // This constant is from CAPICOM

    EncryptionMechanism = New COMObject("CAPICOM.EncryptedData");
    EncryptionMechanism.SetSecret(Password);
    EncryptionMechanism.Content = UnencryptedString;
    EncryptionMechanism.Algorithm.Name = CAPICOM_ENCRYPTION_ALGORITHM_DES;
    EncryptedString = EncryptionMechanism.Encrypt();

    Return EncryptedString;

EndFunction // EncryptStringDES()

Function DecryptStringDES(EncryptedString, Password)

    // Warning! The parameters are not checked!

    EncryptionMechanism = New COMObject("CAPICOM.EncryptedData");
    EncryptionMechanism.SetSecret(Password);
    Try
        EncryptionMechanism.Decrypt(EncryptedString);
    Except
        // Wrong password!
        Return Undefined;
    EndTry;

    Return EncryptionMechanism.Content;

EndFunction // DecryptStringDES()

Please note that passing an empty string or an empty password to these functions raises an error. The string produced by this encryption procedure is slightly longer than the original. Another specific property of this encryption is that encrypting the same string twice yields two DIFFERENT encrypted strings.
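Both observations (the ciphertext is slightly longer than the plaintext, and two encryptions of the same string differ) are typical of any scheme that mixes a fresh random value into each encryption. Here is a toy, stdlib-only Python sketch of the same idea; it is not CAPICOM and not production-grade cryptography, and the keystream construction is purely illustrative:

```python
import hashlib
import os

def _keystream(password: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream from password+nonce (illustrative only)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(password + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(plaintext: bytes, password: bytes) -> bytes:
    nonce = os.urandom(16)  # a fresh random value on every call
    ks = _keystream(password, nonce, len(plaintext))
    return nonce + bytes(p ^ k for p, k in zip(plaintext, ks))

def decrypt(ciphertext: bytes, password: bytes) -> bytes:
    nonce, body = ciphertext[:16], ciphertext[16:]
    ks = _keystream(password, nonce, len(body))
    return bytes(c ^ k for c, k in zip(body, ks))

msg = b"some confidential text"
c1 = encrypt(msg, b"password")
c2 = encrypt(msg, b"password")

print(c1 != c2)                        # two encryptions of the same data differ
print(len(c1) - len(msg))              # ciphertext is longer by the 16-byte nonce
print(decrypt(c1, b"password") == msg) # but both still decrypt correctly
```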

Basic mistakes when using cryptographic tools.

When using cryptographic tools, the same mistakes are often made:

Underestimating the performance penalty when using cryptography.

Cryptography is a task that requires a fairly large amount of computation (especially for algorithms such as DES, 3DES, GOST, PGP). Even with high-performance, optimized algorithms (RC5, RC6, AES) there is no escaping the extra data copying in memory and the extra processing, and this can largely negate the capabilities of many server components (RAID arrays, network adapters). With hardware encryption, or hardware derivation of the encryption key, there is an additional potential bottleneck: the speed of data transfer between the add-on device and memory (the device's own throughput may matter less than this transfer). For small amounts of data (say, an email message) the added computational load is barely noticeable, but total encryption of everything can significantly affect the performance of the system as a whole.

Underestimation of modern capabilities for selecting passwords and keys.

At the moment, technology is such that a 40-48-bit key can be brute-forced by a small organization, and a 56-64-bit key by a large one. That is, algorithms using a key of at least 96 or 128 bits must be used. But most keys are generated by hash algorithms (SHA-1, etc.) from passwords entered by the user, and in that case even a 1024-bit key may not help. First of all, an easy-to-guess password is often used. Factors that make guessing easier include: using letters of only one case; using words, names and common expressions; using well-known dates, birthdays, etc.; using organization-wide "patterns" when generating passwords (for example, 3 letters, then 2 digits, then 3 letters). A good password should be a fairly random sequence of letters of both cases, digits and punctuation marks. Even following these rules, keyboard-entered passwords up to 7-8 characters long can be guessed in a reasonable time, so a password should be at least 11-13 characters. The ideal solution is to avoid deriving the key from a password at all, for example by using smart cards and the like, but then you must provide protection against loss of the key medium.
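These key-length figures are easy to relate to password length: a password's entropy is at most length * log2(alphabet size). A short sketch (Python; the alphabet sizes are common conventions, not taken from the text):

```python
import math

def entropy_bits(length: int, charset_size: int) -> float:
    """Upper bound on password entropy: length * log2(alphabet size)."""
    return length * math.log2(charset_size)

LOWERCASE = 26              # letters of one case only
FULL = 26 + 26 + 10 + 32    # both cases, digits, punctuation

# An 8-character single-case password is far below even the 56-64 bits
# the text says a large organization can already search:
print(round(entropy_bits(8, LOWERCASE)))   # 38

# Even 8 fully random characters from the full set give only ~52 bits:
print(round(entropy_bits(8, FULL)))        # 52

# And 13 full-set characters give ~85 bits: still short of a 96-128 bit key,
# which is why deriving keys from typed passwords weakens long keys so much.
print(round(entropy_bits(13, FULL)))       # 85
```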

Insecure storage of keys and passwords.

Common examples of this error are:

  • long and complex passwords written on sticky notes stuck to the user's monitor;
  • storing all passwords in a file that is not protected (or is protected much more weakly than the system itself);
  • storing electronic keys where anyone can take them;
  • frequently passing electronic keys between users.

Why make an armored door if the key to it is under the doormat?

Transfer of initially encrypted data into an insecure environment.

When setting up a security system, make sure it actually does its job. For example, I once encountered a situation (not related to 1C) where a running program placed an initially encrypted file into a temporary folder in clear form, from where it could be read without difficulty. Backup copies of encrypted data, in clear form, also often end up somewhere "not far" from that data.

Use of cryptographic tools for other purposes

Encrypting data in transit does not make the data inaccessible at the point where it is used. For example, IPSec services do not in any way prevent network traffic from being "sniffed" at the application level on the application server.

Thus, to avoid mistakes when implementing cryptographic systems, you should (at a minimum) do the following before deploying one:

  • Find out:
    • What needs to be protected?
    • Which protection method should you use?
    • Which parts of the system need to be secured?
    • Who will control access?
    • Will encryption work in all the right areas?
  • Determine where the information is stored, how it will be sent over the network, and the computers from which the information will be accessed. This will provide information about network speed, capacity, and usage before implementing the system, which is useful for optimizing performance.
  • Assess the system's vulnerability to various types of attacks.
  • Develop and document a system security plan.
  • Evaluate the economic efficiency (justification) of using the system.

Conclusion

Of course, a quick review cannot cover every aspect of security in 1C, but let us draw some preliminary conclusions. This platform cannot be called ideal: like many others, it has its own problems in organizing a secure system. But this by no means implies that those problems cannot be worked around; on the contrary, almost all of the shortcomings can be eliminated with proper development, deployment and use of the system. Most problems arise from insufficient development of a specific application solution and its execution environment. For example, the standard solutions, without significant changes, simply do not provide for building a sufficiently secure system.

This article once again demonstrates that a set of security measures must cover all stages of implementation: development, deployment, system administration and, of course, organizational measures. In information systems it is the "human factor" (including users) that is the main security threat. The set of measures must be reasonable and balanced: protection that costs more than the data it protects makes no sense, and it is unlikely that such funds would be allocated anyway.
