Understanding Client/Server Architecture In Enterprise Database Management


A client/server architecture is meant to share data processing tasks between the server (typically a high-end machine) and the clients (usually PCs). PCs may have significant processing power of their own: they can take the raw data returned by the server and format it for the needed output. Application programs are stored and executed on the PCs. Network traffic can thus be reduced to the data-manipulation requests sent from the PCs to the DBMS server and the raw data returned in response. The result is significantly less network traffic and, in turn, higher performance.
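This division of labor can be sketched in a few lines. Here sqlite3 stands in for the DBMS server, and the table and function names are invented for illustration; in a real deployment the request would travel over the network and only the matching rows would come back to the client.

```python
import sqlite3

def server_query(conn, min_salary):
    """'Server' side: execute the data-manipulation request, return raw rows."""
    cur = conn.execute(
        "SELECT name, salary FROM employees WHERE salary >= ?", (min_salary,)
    )
    return cur.fetchall()

def client_format(rows):
    """'Client' side: the PC formats the raw data for the needed output."""
    return [f"{name}: ${salary:,.2f}" for name, salary in rows]

# Stand-in for the DBMS server's database
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, salary REAL)")
conn.executemany("INSERT INTO employees VALUES (?, ?)",
                 [("Ada", 95000), ("Grace", 120000), ("Alan", 60000)])

raw = server_query(conn, 90000)   # only matching rows cross the 'network'
report = client_format(raw)       # formatting happens on the client PC
```

Only the two qualifying rows leave the server; the formatting work is done entirely on the client.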

Nowadays, client/server architectures are used effectively to exchange messages over LANs. Even though a few older Token Ring LANs are still in active use, most LANs now follow Ethernet standards. The database runs on a server (i.e., a DBMS server) using disk space on the network's storage device. The DBMS and an authentication server control access to this database.

Structure of client/server architecture

A typical client/server architecture is somewhat similar to a conventional centralized architecture in which the DBMS is hosted on a single machine; indeed, many of today's mainframes function as faster, larger servers. The need to handle large data sets remains, even though the location of some of the processing has changed.

The client/server architecture tends to use a centralized server to host the database, so it can suffer from the same reliability problems as a traditional centralized database architecture: when the server goes down, access may be interrupted. However, because the 'terminals' are PCs, data already downloaded to a particular PC can still be processed even without access to the server.

Lookup Systems

Lookup systems implement a client/server architecture in which the server maintains a blacklist of fraudulent URLs; a client-side tool checks each URL against it and warns the user if the website poses a threat. Lookup systems also use collective sanctioning based on reputation-ranking mechanisms, with blacklist input coming from online communities of users and practitioners. Major organizations such as the Anti-Phishing Working Group have developed databases of known spoof websites to share with lookup systems. In addition, lookup systems can examine URLs directly based on user reports.

Various lookup systems are available; one of the most common is Microsoft's IE Phishing Filter, which uses a client-side whitelist combined with a server-side blacklist collected from online databases and reporting agencies. Other options include the FirePhish toolbar for Mozilla Firefox and EarthLink's toolbar. All of these maintain a curated blacklist of spoof URLs, and users can also add entries of their own. You can get a better insight into database architectures by consulting RemoteDBA.com experts.
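The whitelist-plus-blacklist check described above reduces to a simple lookup. The host lists and the `check_url` helper below are illustrative assumptions, not any vendor's actual API:

```python
from urllib.parse import urlsplit

WHITELIST = {"bank.example.com"}        # known-good hosts (client side)
BLACKLIST = {"bank-login.example.net"}  # reported spoof hosts (server side)

def check_url(url):
    """Classify a URL: whitelist first, then blacklist, else unknown."""
    host = urlsplit(url).hostname or ""
    if host in WHITELIST:
        return "trusted"
    if host in BLACKLIST:
        return "blocked"   # warn the user: known spoof site
    return "unknown"       # not listed; a possible false negative

print(check_url("https://bank.example.com/login"))       # trusted
print(check_url("http://bank-login.example.net/login"))  # blocked
```

The "unknown" case illustrates the false-negative weakness discussed below: a spoof site not yet reported simply passes through.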

The major benefit of lookup systems is their characteristically high accuracy: they are unlikely to flag an authentic site as a phony. They are also easy to work with and, in terms of computational cost, often faster than most available classifier systems, since they simply compare URLs against a list of known phonies.

Despite all this, lookup systems are vulnerable to false negatives and may fail to identify fake websites in certain scenarios. Another limitation of blacklisting is the small number of online resources available and the limits of their coverage. For example, the FirePhish and IE Phishing Filter tools may only amass URLs for spoofed sites, which leaves them ineffective against concocted sites. A lookup system's performance may also vary with the time of day and with the interval between evaluation and reporting times.

Another challenge is that blacklists may list older fake websites rather than the latest ones, giving impostors a better chance of succeeding before they are identified and blacklisted. One report indicates that about 5% of spoof-site recipients fall prey to attacks despite the availability of a lookup system.

Examples of communication paradigms

Peer-to-peer (P2P) networks

P2P networks combine the client and server roles in every peer node in the network. Instead of leaving the data on a single centralized server, each peer participating in the P2P network acts as a server to which all the other peers can connect. Even when one or more nodes fail, the system as a whole does not. Using appropriate control information, a peer can determine which other peers to connect to in order to obtain the needed information.
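A toy in-memory model makes the dual role concrete: each peer stores some data (its server role) and, when asked for a key it lacks, queries its neighbors (its client role). The class and method names here are invented for this sketch, with direct method calls standing in for network messages:

```python
class Peer:
    def __init__(self, name, data=None):
        self.name = name
        self.data = dict(data or {})
        self.neighbors = []

    def serve(self, key):
        """Server role: answer a request from another peer."""
        return self.data.get(key)

    def lookup(self, key):
        """Client role: check locally, then ask each neighbor in turn."""
        if key in self.data:
            return self.data[key]
        for peer in self.neighbors:
            value = peer.serve(key)
            if value is not None:
                return value
        return None

a = Peer("A", {"doc1": "alpha"})
b = Peer("B", {"doc2": "beta"})
c = Peer("C")
c.neighbors = [a, b]   # even if A fails, C can still reach B for doc2
```

Because `doc2` lives on peer B rather than on any central server, C can still retrieve it even if A disappears, illustrating how the system tolerates individual node failures.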

Content delivery networks

These networks focus on pushing content from the source toward the users. This proactive distribution model lets clients quickly access the copy of the content located closest to them, which assures better access performance and speed than contacting the origin server. Content distribution requires a network support mechanism that redirects a query to a local copy; in this model, a form of anycast can be achieved by manipulating DNS entries.
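The redirection step amounts to picking the "closest" replica for each client. The region names and latency table below are invented for illustration; a real CDN would derive this information from DNS resolution, anycast routing, or network measurements:

```python
REPLICAS = ["us-east", "eu-west", "ap-south"]

# hypothetical latency (ms) from each client region to each replica
LATENCY = {
    ("Berlin", "us-east"): 95,  ("Berlin", "eu-west"): 12,  ("Berlin", "ap-south"): 140,
    ("Mumbai", "us-east"): 210, ("Mumbai", "eu-west"): 120, ("Mumbai", "ap-south"): 25,
}

def redirect(client_region):
    """Direct the client to the replica with the lowest latency."""
    return min(REPLICAS, key=lambda r: LATENCY[(client_region, r)])
```

A client in Berlin is steered to the eu-west copy and one in Mumbai to ap-south, which is exactly the effect a DNS-based anycast scheme achieves by returning different addresses for the same name.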

Sensor network information fusion

Many sensor networks consist of low-power sensors that monitor physical properties of the environment in which they are deployed. Such networks communicate over ad hoc wireless links that do not provide continuous connectivity. In applications where the data gathered by multiple sensors can be aggregated, information fusion is employed effectively. For example, to assess the maximum observed temperature, a designated node can aggregate the results from all thermal sensors and compute the final maximum. Data access in sensor networks is thus significantly different from conventional client/server architectures.
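The temperature example above can be shown in miniature: an aggregation node combines readings from several thermal sensors and forwards only the computed maximum rather than every raw reading. The sensor names and values are invented for illustration:

```python
def fuse_max(readings):
    """Aggregate per-sensor temperature readings into a single maximum."""
    all_values = [v for values in readings.values() for v in values]
    return max(all_values) if all_values else None

readings = {
    "sensor-1": [21.5, 22.0, 21.8],
    "sensor-2": [23.1, 22.7],
    "sensor-3": [20.9],
}
peak = fuse_max(readings)  # one value crosses the network instead of six
```

Forwarding one fused value instead of six raw readings is what makes this model attractive on low-power, intermittently connected links.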

Hope this article helped you gain a better understanding of a few less-explored corners of client/server architecture. We will discuss functions and troubleshooting in forthcoming articles.
