What is not a DDBMS

The following are not examples of a DDBMS:


 A time sharing computer system.

 A loosely or tightly coupled multiprocessor system.

 A database which resides at one of the nodes of a network of computers – this is a centralized database on a network node.

Advantages and disadvantages of DDBMS


Advantages : 


 Reflects organizational structure
 Improved shareability and local autonomy
 Improved availability
 Improved reliability
 Improved performance
 Modular growth
 Less danger of a single point of failure

Disadvantages:


 Complexity 
 Cost
 Security
 Integrity control more difficult
 Lack of standards
 Lack of experience
 Database design more complex
 Possible slow response

STRING.CPP with overloaded operator +


#include <iostream>
#include <cstring>

class String {
   char *char_ptr;   // pointer to string contents
   int length;       // length of string in characters
public:
   // three different constructors
   String(const char *text);           // constructor using existing string
   String(int size = 80);              // creates default empty string
   String(const String& Other_String); // for assignment from another
                                       // object of this class
   ~String() { delete[] char_ptr; }    // inline destructor; char_ptr is an
                                       // array, so use delete[]
   int Get_len (void);
   String operator+ (String& Arg);
   void Show (void);
};

Advantage of Inline Function

In hard real-time systems, the most important factor is the execution time of the code. So when you call functions, especially in loops, you lose time on the function call and return mechanism. To avoid this loss of time (by removing the need for the function call mechanism), put the inline keyword in front of the function definition, as shown below:

inline double cube(double x)
{
    return x*x*x;
}

Conditional compilation


One of the most powerful features of the preprocessor is so-called conditional compilation: portions of the code can be excluded from the actual compilation under certain conditions.

This means that your source could contain special code for, say, the ARM processor. Using conditional  compilation, this code could be ignored when compiling for all other processors. 
The preprocessor directives #ifdef, #ifndef, #if, #elif, and #else are used to control the 
source code. 

The #ifdef (#ifndef) directive includes a section if a preprocessor symbol is defined 
(undefined). 

Const and data hiding


In C++, when a member function returns a pointer to one of its member variables, there is a possibility of the pointer, or the value it points to, being modified by the caller. This problem can be overcome by using the const keyword. An example illustrating this idea is given below.

class Person
{
public:
    Person(const char* szNewName)
    {
        // make a copy of the string
        m_szName = _strdup(szNewName);
    }
    ~Person() { free(m_szName); }   // _strdup allocates with malloc,
                                    // so release with free, not delete[]
    const char* const GetName() const
    {
        return m_szName;
    }
private:
    char* m_szName;
};

In the above class the GetName() member function returns a pointer to the member variable m_szName. To prevent this member variable from getting accidentally modified, GetName() has been prototyped to return a constant pointer pointing to a constant value. Also, the const keyword at the end of the function prototype states that the function does not modify any of the member variables.

Don’t ignore API function return values


Most API functions will return a particular value which represents an error. You should test for these values every time you call the API function. If you don't want to clutter your code with error-testing, then wrap the API call in another function (do this when you are thinking about portability, too) which tests the return value and either asserts, handles the problem, or throws an exception. The above example of Open Data File is a primitive way of wrapping fopen with error-checking code which throws an exception if fopen fails.

Redefinitions, redeclarations, and conflicting types


Consider what happens if a C source file includes both a.h and b.h, and also a.h includes b.h (which is perfectly sensible; b.h might define some types that a.h needs). Now, the C source file includes b.h twice. So every #define in b.h occurs twice, every declaration occurs twice (not actually a problem), every typedef occurs twice, etc. In theory, since they are exact duplicates it
shouldn’t matter, but in practice it is not valid C and you will probably get compiler errors or at least warnings.

The solution to this problem is to ensure that the body of each header file is included only once per source file. This is generally achieved using preprocessor directives. We will define a macro for each header file, as we enter the header file, and only use the body of the file if the macro is not already defined. In practice it is as simple as putting this at the start of each header file:
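The standard include-guard idiom looks like the fragment below; the macro name MYHEADER_H is a conventional example, usually derived from the header's file name.

```cpp
/* myheader.h */
#ifndef MYHEADER_H          /* skip the body if it was already included */
#define MYHEADER_H          /* remember that we have been here */

/* ...body of the header: types, prototypes, #defines... */

#endif /* MYHEADER_H */
```

On the second and subsequent inclusions, MYHEADER_H is already defined, so the preprocessor discards the entire body.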

Multiply defined symbols


A header file is literally substituted into your C code in place of the #include statement. Consequently, if the header file is included in more than one source file all the definitions in the header file will occur in both source files. This causes them to be defined more than once, which gives a linker error.

Solution: don’t define variables in header files. You only want to declare them in the header file, and define them (once only) in the appropriate C source file, which should #include the header file of course for type checking. The distinction between a declaration and a definition is easy to miss for beginners; a declaration tells the compiler that the named symbol should exist and should have the specified type, but it does not cause the compiler to allocate storage space for it, while a definition does allocate the space. To make a declaration rather than a definition, put the keyword ‘extern’ before the definition.

So, if we have an integer called 'counter' which we want to be publicly available, we would define it in a source file (one only) as 'int counter;' at top level, and declare it in a header file as 'extern int counter;'.

Identifier clashes between source files


In C, variables and functions are by default public, so that any C source file may refer to global variables and functions from another C source file. This is true even if the file in question does not have a declaration or prototype for the variable or function. You must, therefore, ensure that the same symbol name is not used in two different files. If you don’t do this you will get
linker errors and possibly warnings during compilation. 

One way of doing this is to prefix public symbols with some string which depends on the source file they appear in. For example, all the routines in gfx.c might begin with the prefix 'gfx_'. If you are careful with the way you split up your program, use sensible function names, and don't go overboard with global variables, this shouldn't be a problem anyway.


To prevent a symbol from being visible from outside the source file it is defined in, prefix its definition with the keyword ‘static’. This is useful for small functions which are used internally by a file, and won’t be needed by any other file.

Generating build scripts in C


The following commands have to be executed in order to generate the build scripts for the project.

1. libtoolize
2. aclocal
3. autoheader
4. autoconf
5. touch README AUTHORS NEWS ChangeLog (Required for GNU software adherence)
6. automake -a

The execution of the above six commands generates the configure script in the top directory and Makefile scripts in the top directory as well as in each of the subdirectories.
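Those commands consume a handful of input files that the developer writes by hand. As a minimal, hypothetical illustration (file names and contents here are assumptions for a tiny example project, not part of the original text), they might look like:

```
# configure.ac -- processed by autoconf to produce ./configure
AC_INIT([myproject], [0.1])
AM_INIT_AUTOMAKE
AC_PROG_CC
AC_CONFIG_HEADERS([config.h])
AC_CONFIG_FILES([Makefile src/Makefile])
AC_OUTPUT

# Makefile.am (top directory) -- processed by automake
SUBDIRS = src

# src/Makefile.am
bin_PROGRAMS = myprog
myprog_SOURCES = main.c
```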

Using Automake and Autoconf


The automake and autoconf tools enable the developer to get rid of the tedium of writing complicated Makefiles for large projects, and also provide portability across various platforms. These tools have been specifically designed for managing GNU projects. Software developed using automake and autoconf needs to adhere to the GNU software engineering principles.

Project directory structure


The project directory is recommended to have the following subdirectories
and files.

src : Contains the actual source code that gets compiled. Every library should have its own subdirectory, and every executable should have its own subdirectory as well. If each executable needs only one or two source files, it is sensible to keep all of these source files in the same directory.

lib : An optional directory in which you place portability code, such as implementations of system calls that are not available on certain platforms.

doc : Directory containing documentation for your package.

m4 : A directory containing 'm4' files that your package may need to install. These files define new 'autoconf' macros that you should make available to other developers who want to use your libraries.

intl : Portability source code which allows your program to talk in various languages.

po : Directory containing message catalogs for your software package. Automake makes it really easy to manage multi-directory source code packages, so you shouldn't be shy about taking advantage of it.

Managing multi-file C/C++ project


The key to better software engineering is to focus away from developing
monolithic applications that do only one job, and focus on developing libraries.
One way to think of libraries is as a program with multiple entry
points. Every library you write becomes a legacy that you can pass on to
other developers. Just like in mathematics you develop little theorems and
use the little theorems to hide the complexity in proving bigger theorems,
in software engineering you develop libraries to take care of low-level details
once and for all, so that they are out of the way every time you make a
different implementation for a variation of the problem.

On a higher level you still don’t create just one application. You create
many little applications that work together. The centralized all-in-one approach
in my experience is far less flexible than the decentralized approach
in which a set of applications work together as a team to accomplish the
goal. In fact this is the fundamental principle behind the design of the Unix
operating system. Of course, it is still important to glue together the various
components to do the job. This you can do either with scripting or with
actually building a suite of specialized monolithic applications derived from
the underlying tools.