The material presents a real problem inherent in the management of computer systems, namely that of finding the appropriate system settings so as to achieve the expected performance. The material also presents a prototype which aims to adapt the system to achieve the objective, defined as application efficiency. The prototype uses the resource-oriented mechanism built into the OS Workload Manager and centres on a proposed goal-oriented subsystem based on fuzzy logic, which manages resources so as to make the best use of them and translates goals into the use of system resources, taking into account nondeterministic technology-related factors such as the duration of allocation and release of resources, sharing of resources in uncapped mode, and performance measurement errors.
Accurate use of the ability to steer computer efficiency is essential from the database point of view. Effective resource allocation depends on performance indicators gathered from running systems. There must be an appropriate balance between accurate measurements, performance indicators and the speed of the algorithms that reallocate computing resources. The extended measure of efficiency which the authors propose for applications is the average number of queries per time unit for particular groups of users. This paper presents an analysis of using the Workload Manager utility in the AIX 5L operating system to improve the efficiency of applications in the MySQL database environment, and an analysis of methods which allow Workload Manager to be used for steering efficiency dynamically.
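As a rough illustration of the goal-oriented, fuzzy-logic steering idea described above, the sketch below maps the gap between a target and a measured query rate onto a CPU-share adjustment. The function names, membership functions and rule outputs are all hypothetical; this is not the authors' WLM subsystem.

```python
# Minimal sketch of one goal-oriented fuzzy steering step (illustrative only;
# names, membership functions and rule outputs are hypothetical).

def triangular(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def share_adjustment(target_qps, measured_qps):
    """Map the relative goal error to a CPU-share change via three fuzzy rules."""
    error = (target_qps - measured_qps) / target_qps  # > 0: running below the goal
    below   = triangular(error, 0.0, 0.5, 1.5)    # rule: below the goal -> add shares
    on_goal = triangular(error, -0.5, 0.0, 0.5)   # rule: close to the goal -> no change
    above   = triangular(error, -1.5, -0.5, 0.0)  # rule: above the goal -> remove shares
    weight = below + on_goal + above
    if weight == 0.0:                              # saturate outside the rule support
        return 10.0 if error > 0 else -10.0
    # Defuzzify with a weighted average of the rule outputs (+10, 0, -10 shares).
    return (below * 10.0 + on_goal * 0.0 + above * -10.0) / weight

print(share_adjustment(100.0, 60.0))  # below goal -> positive share increase
```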
In this article, the authors analyze methods of assessing the consequences of data integrity, security and availability loss for business processes. By assessing those consequences, one can judge the importance of a process in an organization and thus determine which business processes require more attention. The importance of those processes can be determined with Business Impact Analysis (BIA). The article presents the first phase of BIA, specifically the construction of Business Impact Category Tables and Loss Levels, and process weight calculation methods. A variety of weight calculation methods is presented. The authors also present their proposed method, square sum percentage, as a solution that eliminates problems of other weight calculation methods in business impact analysis.
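As an illustration only: if each business process is assigned a numeric impact score, a square-sum-percentage weight can be read as the squared score normalized by the sum of squared scores, which emphasizes high-impact processes. This reading of the method, and the function name, are assumptions rather than the authors' exact formulation.

```python
# Sketch of a "square sum percentage" weight calculation (hypothetical reading:
# each weight is a squared impact score divided by the sum of squared scores).

def square_sum_percentage(scores):
    """Return normalized weights proportional to the squared impact scores."""
    total = sum(s * s for s in scores)
    return [s * s / total for s in scores]

weights = square_sum_percentage([1.0, 2.0, 3.0])
print(weights)  # squared scores 1, 4, 9 normalized by their sum 14
```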
Traffic classification is an important tool for network management. It reveals the source of observed network traffic and has many potential applications, e.g. in Quality of Service, network security and traffic visualization. In the last decade, traffic classification evolved quickly due to the rise of peer-to-peer traffic. Nowadays, researchers still devise new methods to keep up with the rapid changes of the Internet. In this paper, we review 13 publications on traffic classification and related topics published between 2009 and 2012. We show the diversity of recent algorithms and highlight possible directions for future research on traffic classification: the relevance of multi-level classification, the importance of experimental validation, and the need for common traffic datasets.
We propose time slot routing, a novel routing scheme that allows for a simple design of interconnection networks. Simulation results show that the proposed scheme achieves optimal performance at the maximal uniform network load, and that for uniform loads the network throughput is greater than for deflection routing.
In this article we study a model of a TCP connection with Active Queue Management in an intermediate IP router. We use the fluid flow approximation technique to model the interactions between a set of TCP flows and AQM algorithms. Computations for the fluid flow approximation model are performed in the CUDA environment.
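For illustration, the interaction between TCP flows and an AQM queue can be sketched with a simplified fluid-flow model integrated by Euler steps. Delay arguments are dropped, the drop probability is a RED-style ramp, and all parameters are illustrative; this is not the paper's CUDA implementation.

```python
# Minimal Euler-integration sketch of a simplified TCP/AQM fluid-flow model
# (delay terms omitted; parameters chosen only for illustration).

def simulate(n_flows=60, capacity=3000.0, rtt=0.1, steps=20000, dt=0.001):
    """Integrate the average window w and queue length q over time."""
    w, q = 1.0, 0.0
    min_th, max_th, p_max = 50.0, 150.0, 0.1
    for _ in range(steps):
        # RED-style drop probability as a function of the queue length.
        if q <= min_th:
            p = 0.0
        elif q >= max_th:
            p = p_max
        else:
            p = p_max * (q - min_th) / (max_th - min_th)
        dw = 1.0 / rtt - (w * w / (2.0 * rtt)) * p  # additive increase, multiplicative decrease
        dq = n_flows * w / rtt - capacity           # arrival rate minus link capacity
        w = max(w + dw * dt, 1.0)                   # window never drops below one segment
        q = min(max(q + dq * dt, 0.0), 300.0)      # queue bounded by the buffer size
    return w, q

w, q = simulate()
print(round(w, 2), round(q, 2))
```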
In this article we study a model of network transmissions with Active Queue Management in an intermediate IP router. We use the OMNeT++ discrete event simulator to model various variants of the CHOKe algorithm. We model a system where CHOKe, xCHOKe and gCHOKe serve as the AQM policy. The obtained results show the behaviour of these algorithms. The paper also presents an implementation of AQM mechanisms in a Linux-based router.
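The basic CHOKe comparison step can be sketched as follows: on arrival during congestion, compare the packet with one randomly drawn queued packet and drop both if they belong to the same flow. The toy queue below stores flow identifiers only, and the congestion threshold is a simplified assumption; xCHOKe and gCHOKe extend the step with additional candidate draws.

```python
import random

# Toy sketch of the basic CHOKe arrival step (flow identifiers only;
# thresholds and the congestion test are simplified assumptions).

def choke_arrival(queue, flow_id, max_len=100, rng=random):
    """Admit or drop an arriving packet of `flow_id` using one CHOKe draw."""
    if queue and len(queue) > max_len // 2:   # congestion: draw a random victim
        victim = rng.randrange(len(queue))
        if queue[victim] == flow_id:          # same flow: drop victim and arrival
            del queue[victim]
            return False
    if len(queue) >= max_len:                 # tail drop when the buffer is full
        return False
    queue.append(flow_id)
    return True
```

A single unresponsive flow that dominates the queue matches its own packets on almost every draw, so its occupancy is capped near the congestion threshold.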
This paper presents a non-linear mathematical model of a computer network that includes a wireless part. The article contains an analysis of the stability of a network based on TCP-DCR, a modification of traditional TCP. The block diagram of the network model was converted to a form suitable for investigating D-stability using the method of the space of uncertain parameters. Robust D-stability is computed for constant delay values.
The predicted annual growth of energy consumption in ICT by 4% towards 2020, despite improvements and efficiency gains in technology, is challenging our ability to claim that ICT provides overall gains in energy efficiency and carbon footprint, as computers and networks are increasingly used in all sectors of activity. Thus we must find means to limit this increase while preserving quality of service (QoS) in computer systems and networks. Since the energy consumed in ICT is related to system load, this paper discusses the choice of system load that offers the best trade-off between energy consumption and QoS. We use both simple queueing models and measurements to develop and illustrate the results. A discussion of future research directions is also provided.
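As a toy illustration of such a load trade-off, consider an M/M/1 queue with load-proportional power draw: energy per served job falls as load rises, while mean response time grows, so a weighted cost is minimized at an intermediate load. All parameter values below are hypothetical, not the paper's.

```python
# Illustrative M/M/1 sketch of the energy-vs-QoS trade-off
# (all parameter values are hypothetical).

def cost(rho, mu=1.0, p_idle=50.0, p_max=100.0, beta=100.0):
    """Weighted sum of energy per job and M/M/1 mean response time at load rho."""
    power = p_idle + (p_max - p_idle) * rho   # load-proportional power draw
    energy_per_job = power / (rho * mu)       # throughput at load rho is rho * mu
    response_time = 1.0 / (mu * (1.0 - rho))  # M/M/1 mean response time
    return energy_per_job + beta * response_time

# Scan loads on a grid to locate the best trade-off point.
loads = [i / 100.0 for i in range(1, 100)]
best = min(loads, key=cost)
print(best)
```

Very low loads waste idle power per job; very high loads blow up the response time; the optimum sits in between.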
The virtual machine described in the paper is a runtime program for controllers in small distributed systems. The machine executes an intermediate universal code, similar to an assembler, compiled in the CPDev engineering environment from source programs written in the control languages of the IEC 61131-3 standard. The machine is implemented as a C program, so it can run on different target platforms. Data formats and commands of the machine code are presented, together with the machine's Petri-net model, a C implementation involving universal and platform-dependent modules, the target hardware interface, input/output programming mechanisms, and practical applications.
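A stack-based interpreter loop of the general kind such machines use can be sketched as follows. The opcodes and encoding here are invented purely for illustration; the CPDev machine's actual instruction set and data formats differ.

```python
# Toy sketch of a stack-based virtual machine executing a tiny intermediate
# code (hypothetical opcodes; not the CPDev instruction set).

def run(program):
    """Execute a list of (opcode, operand) pairs against an operand stack."""
    stack = []
    pc = 0
    while pc < len(program):
        op, arg = program[pc]
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "JZ":              # jump to arg if the popped top is zero
            if stack.pop() == 0:
                pc = arg
                continue
        pc += 1
    return stack[-1] if stack else None

# (2 + 3) * 4 compiled to the toy intermediate code.
result = run([("PUSH", 2), ("PUSH", 3), ("ADD", None), ("PUSH", 4), ("MUL", None)])
print(result)  # 20
```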
The main idea of all Active Queue Management algorithms is to notify the TCP sender about incoming congestion by dropping packets, in order to prevent buffer overflow and its negative consequences. However, most AQM algorithms proposed so far neglect the impact of high-speed and long-delay links. As a result, the algorithms' efficiency, in terms of throughput and/or queue stability, is usually significantly decreased. The contribution of this paper is twofold. First, the performance of well-known AQM algorithms in high-speed and long-delay scenarios is evaluated and compared. Second, a new AQM algorithm is proposed, which improves throughput in large-delay scenarios and eliminates the use of a random number generator.
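One generic way to drop packets without a random number generator, shown purely as a sketch and not as the paper's algorithm, is to accumulate the drop probability per packet and drop whenever the accumulator crosses 1, which yields deterministic, evenly spaced drops.

```python
# Sketch of RNG-free dropping: accumulate the drop probability p per packet
# and drop when the credit crosses 1 (deterministic, evenly spaced drops;
# illustrative only, not the paper's algorithm).

class DeterministicDropper:
    def __init__(self):
        self.credit = 0.0

    def should_drop(self, p):
        """Return True for roughly a fraction p of packets, without randomness."""
        self.credit += p
        if self.credit >= 1.0:
            self.credit -= 1.0
            return True
        return False

d = DeterministicDropper()
drops = sum(d.should_drop(0.1) for _ in range(1000))
print(drops)  # close to 1000 * 0.1
```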