Search results

Abstract

We address one of the weaknesses of RSA ciphering systems, namely the existence of private keys that are relatively easy for an attacker to compromise. The problem can be mitigated by Internet service providers, but it requires some computational effort. We propose a proof of concept of a GPGPU-accelerated system that can help detect and eliminate users' weak keys. We have proposed the algorithms and developed GPU-optimised program code that is now publicly available and substantially outperforms the tested CPU. The source code of the OpenSSL library was adapted for GPGPU, and the resulting code can run on both GPU and CPU processors. Additionally, we present a solution for mapping a triangular grid onto the rectangular GPU grid – a basic dilemma in many problems that involve pair-wise analysis of a set of elements. A comparison of two data caching methods on the GPGPU also leads to interesting general conclusions. We present the results of experiments analysing the performance of the selected algorithms for various RSA key lengths, configurations of the GPU grid, and sizes of the tested key set.
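
The triangular-to-rectangular index mapping mentioned in the abstract can be illustrated with the standard closed-form conversion from a linear work-item index to an (i, j) pair. The sketch below is a minimal CPU-side illustration in Python; the function names and the GCD-based weak-key check are illustrative assumptions, not the paper's actual GPU kernel.

    import math
    from math import gcd

    def pair_from_linear(k: int, n: int):
        """Map a linear work-item index k (0 <= k < n*(n-1)/2) to the pair (i, j), i < j.

        Standard closed-form inversion of the row-major upper-triangle ordering;
        on a GPU, k would be derived from the thread/block indices.
        """
        i = n - 2 - int(math.floor(math.sqrt(-8 * k + 4 * n * (n - 1) - 7) / 2.0 - 0.5))
        j = k + i + 1 - n * (n - 1) // 2 + (n - i) * (n - i - 1) // 2
        return i, j

    def find_weak_pairs(moduli):
        """Illustrative pair-wise scan: two RSA moduli sharing a prime factor
        have gcd > 1, which compromises both keys."""
        n = len(moduli)
        weak = []
        for k in range(n * (n - 1) // 2):
            i, j = pair_from_linear(k, n)
            if gcd(moduli[i], moduli[j]) > 1:
                weak.append((i, j))
        return weak

    # toy example: 77 = 7*11 and 91 = 7*13 share the factor 7; 667 = 23*29 is coprime to both
    print(find_weak_pairs([77, 91, 667]))   # -> [(0, 1)]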

Abstract

GNSS systems are susceptible to radio interference despite operating in a spread-spectrum mode. Commercial jammers with a power of up to 2 watts can block the receiver at a distance of up to 15 kilometers in free space. Two original methods for GNSS receiver testing were developed. The first method is based on using a GNSS simulator to generate the satellite signals and a vector signal RF generator to generate different types of interference signals. The second, software-radio method is based on a software GNSS simulator and signal processing in Matlab. The receivers were tested for narrowband CW interference, FM-modulated signals, and chirp jamming signals and scenarios. The signal-to-noise ratio usually drops down to 27 dB-Hz, while the jamming-to-signal ratio differs for different types of interference. The chirp signal is very effective. The jammer signal propagates well in free space, while in real mobile urban and suburban environments it is usually strongly attenuated.
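
The free-space figures quoted above can be checked with a simple link-budget sketch. The assumptions below (GPS L1 carrier, 0 dBi antennas, the jammer's full 2 W radiated as EIRP, a nominal -130 dBm GNSS signal level) are illustrative and not taken from the abstract.

    import math

    def fspl_db(d_m: float, f_hz: float) -> float:
        """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
        c = 299_792_458.0
        return 20.0 * math.log10(4.0 * math.pi * d_m * f_hz / c)

    f_l1 = 1575.42e6                                    # GPS L1 carrier [Hz] (assumption)
    jammer_eirp_dbm = 10.0 * math.log10(2.0 * 1000.0)   # 2 W ~= 33 dBm

    for d in (1e3, 5e3, 15e3):
        j_dbm = jammer_eirp_dbm - fspl_db(d, f_l1)
        print(f"d = {d/1e3:5.1f} km  received jammer power ~= {j_dbm:6.1f} dBm")
    # A GNSS signal at the Earth's surface is roughly -130 dBm, so even at 15 km
    # the jammer-to-signal ratio remains tens of dB in free space.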

Abstract

A novel non-orthogonal multiple access (NOMA) scheme is proposed to improve the throughput and the outage probability of a cognitive radio (CR) inspired system, designed to support multiple services in the next-generation network (5G). In the proposed scheme, the primary source (PS) sends a superposition-coded symbol with a predefined power allocation to the relays, which decode and forward (DF) a new superposition-coded symbol to the destination with a different power allocation. Using a dual antenna at the relays improves the bandwidth efficiency of such a CR-NOMA scheme. The performance of the system is evaluated in terms of the outage probability and the throughput under the assumption of Rayleigh fading channels. The obtained results show that the proposed full-duplex (FD) CR-NOMA scheme with reasonable parameters can be deployed in practical designs, as illustrated in the numerical results section.
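
For orientation, the sketch below is a much-simplified two-user downlink NOMA outage simulation over Rayleigh fading. The power-allocation coefficients, target rates and transmit SNR are illustrative assumptions; the relaying, full-duplex and cognitive-radio aspects of the proposed scheme are not modelled.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative parameters (assumptions, not taken from the paper)
    snr_db = 20.0               # transmit SNR P/N0
    a_far, a_near = 0.8, 0.2    # power-allocation coefficients (a_far + a_near = 1)
    r_far, r_near = 0.5, 1.0    # target rates [bit/s/Hz]
    trials = 200_000

    snr = 10.0 ** (snr_db / 10.0)
    # Rayleigh fading -> channel power gain |h|^2 is exponentially distributed (unit mean)
    g_far = rng.exponential(1.0, trials)
    g_near = rng.exponential(1.0, trials)

    # Far user decodes its own symbol, treating the near user's signal as interference
    rate_far = np.log2(1.0 + a_far * snr * g_far / (a_near * snr * g_far + 1.0))
    # Near user first decodes (and cancels) the far user's symbol, then its own
    rate_far_at_near = np.log2(1.0 + a_far * snr * g_near / (a_near * snr * g_near + 1.0))
    rate_near = np.log2(1.0 + a_near * snr * g_near)

    p_out_far = np.mean(rate_far < r_far)
    p_out_near = np.mean((rate_far_at_near < r_far) | (rate_near < r_near))
    print(f"outage (far user)  ~= {p_out_far:.4f}")
    print(f"outage (near user) ~= {p_out_near:.4f}")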

Abstract

The performance of standard Direction of Arrival (DOA) estimation techniques degrades under realistic signal conditions. The classical algorithms are Multiple Signal Classification (MUSIC) and Estimation of Signal Parameters via Rotational Invariance Technique (ESPRIT). Many signal conditions hamper their performance; in particular, closely spaced and coherent signals caused by multipath propagation decrease the signal-to-noise ratio (SNR) of the received signal. In this paper, a novel DOA estimation technique named CW-PCA MUSIC is proposed, which uses Principal Component Analysis (PCA) to threshold the nearby correlated wavelet coefficients of the Dual-Tree Complex Wavelet Transform (DTCWT), denoising the signals before the MUSIC algorithm is applied. The proposed technique improves detection performance for closely spaced and coherent signals under relatively low SNR conditions. Moreover, this method requires fewer snapshots and fewer antenna array elements than standard MUSIC and wavelet-based DOA estimation algorithms.
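
As background for the MUSIC stage of the proposed method, the sketch below computes a minimal MUSIC pseudospectrum for a uniform linear array in plain NumPy. Array size, source angles, SNR and snapshot count are illustrative assumptions; the DTCWT/PCA denoising step is not included.

    import numpy as np

    def music_spectrum(X, n_sources, d_over_lambda=0.5, grid_deg=None):
        """Minimal MUSIC pseudospectrum for an M-element uniform linear array.

        X : complex array of shape (M, N) holding N snapshots.
        Returns (scan angles in degrees, pseudospectrum).
        """
        if grid_deg is None:
            grid_deg = np.arange(-90.0, 90.5, 0.5)
        M, N = X.shape
        R = X @ X.conj().T / N                    # sample covariance matrix
        _, eigvec = np.linalg.eigh(R)             # eigenvalues in ascending order
        En = eigvec[:, : M - n_sources]           # noise-subspace eigenvectors
        m = np.arange(M)[:, None]
        a = np.exp(-2j * np.pi * d_over_lambda * m * np.sin(np.deg2rad(grid_deg)))
        denom = np.sum(np.abs(En.conj().T @ a) ** 2, axis=0)
        return grid_deg, 1.0 / denom

    # toy example: two sources at -20 and 25 degrees, 8 sensors, 200 snapshots, ~10 dB SNR
    rng = np.random.default_rng(1)
    M, N, doas = 8, 200, np.deg2rad([-20.0, 25.0])
    A = np.exp(-2j * np.pi * 0.5 * np.arange(M)[:, None] * np.sin(doas)[None, :])
    S = (rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))) / np.sqrt(2)
    noise = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) * np.sqrt(0.1 / 2)
    grid, P = music_spectrum(A @ S + noise, n_sources=2)
    print(grid[np.argmax(P)])   # strongest peak; a proper peak search would report both DOAs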

Abstract

In this paper, we propose a new algorithm that improves the performance of the handover (HO) operation in LTE-Advanced (LTE-A) networks. Mobility Management (MM) is an important pillar of LTE/LTE-A systems, providing a high quality of service to users on the move. Handover algorithms define the method and the steps to follow to ensure a reliable transfer of UEs from one cell to another without interruption or degradation of the services offered by the network. The proposed handover algorithm for LTE/LTE-A networks is based on the measurement and calculation of two important parameters, namely the available bandwidth and the Reference Signal Received Power (RSRP) at the level of the eNodeBs. The proposed scheme, named LTE Available Bandwidth and RSRP Based Handover Algorithm (LABRBHA), was tested against well-known algorithms from the literature, such as LHHA, LHHAARC and the INTEGRATOR scheme, using the open-source simulator LTE-Sim. Network performance was investigated via three indicators: the number of packets lost during the handover operation, the latency, and the maximum system throughput. The results show that our algorithm offers remarkable improvements over the other transfer schemes.
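
The sketch below illustrates a toy decision rule combining the two criteria named in the abstract (RSRP and available bandwidth). The thresholds, hysteresis margin and selection logic are assumptions for illustration only; they are not the actual LABRBHA algorithm.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Cell:
        cell_id: str
        rsrp_dbm: float           # RSRP measured by the UE for this cell
        available_bw_mbps: float  # bandwidth the eNodeB can still allocate

    def pick_handover_target(serving: Cell, neighbours: list[Cell],
                             hysteresis_db: float = 3.0,
                             min_bw_mbps: float = 5.0) -> Optional[Cell]:
        """Illustrative RSRP + available-bandwidth handover rule: hand over only to a
        neighbour whose RSRP exceeds the serving cell's by a hysteresis margin and
        which still has enough spare bandwidth."""
        candidates = [c for c in neighbours
                      if c.rsrp_dbm > serving.rsrp_dbm + hysteresis_db
                      and c.available_bw_mbps >= min_bw_mbps]
        if not candidates:
            return None
        # prefer the candidate with the most spare bandwidth, then the strongest signal
        return max(candidates, key=lambda c: (c.available_bw_mbps, c.rsrp_dbm))

    serving = Cell("eNB-1", rsrp_dbm=-105.0, available_bw_mbps=2.0)
    neighbours = [Cell("eNB-2", -98.0, 12.0), Cell("eNB-3", -96.0, 1.0)]
    target = pick_handover_target(serving, neighbours)
    print(target.cell_id if target else "stay on serving cell")   # -> eNB-2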

Abstract

One of the important issues concerning the development of spatial data infrastructures (SDIs) is the carrying out of economic and financial analysis. It is essential to determine the expenses and also to assess the effects resulting from the development and use of infrastructures. Costs and benefits assessment can be associated with assessment of the infrastructure effectiveness and efficiency, as well as of the infrastructure value, understood as the infrastructure impact on economic aspects of organisational performance, both of the organisation which realises an SDI project and of all users of the infrastructure. The aim of this paper is to provide an overview of various methods of investment assessment as well as an analysis of the different types of costs and benefits used for information technology (IT) projects. Based on the literature, an analysis of examples of the use of these methods in the area of spatial data infrastructures is also presented. Furthermore, the issues of SDI projects and investments are outlined. The results of the analysis indicate the usefulness of financial methods from different fields of management in the area of SDI building, development and use. The author proposes, in addition to the financial methods, the adaptation of the various techniques used for IT investments and their development, taking into consideration the SDI specificity, for the purpose of assessing different types of costs and benefits and integrating financial aspects with non-financial ones. Among the challenges are the identification and quantification of costs and benefits, as well as establishing measures which would fit the characteristics of the SDI project and the artefacts resulting from the project realisation. Moreover, aspects of subjectivity and variability in time should be taken into account as consequences of the definite goals and policies as well as the business context of the organisation undertaking the project or using its artefacts, and also of the investors.
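
Net present value (NPV) is one of the standard investment-assessment methods typically applied to IT projects of the kind the abstract alludes to; it is not named explicitly in the abstract, and the cash flows and discount rate below are purely illustrative.

    def npv(rate: float, cash_flows: list[float]) -> float:
        """Net present value: sum of cash flows discounted to t = 0.
        cash_flows[0] is the initial outlay (negative), cash_flows[t] the net benefit in year t."""
        return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

    # Purely illustrative SDI project: initial cost, then yearly net benefits
    flows = [-500_000, 120_000, 160_000, 180_000, 180_000, 180_000]
    print(f"NPV at an 8% discount rate: {npv(0.08, flows):,.0f}")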

Abstract

The adjustment problem of the so-called combined (hybrid, integrated) network created from GNSS vectors and terrestrial observations has been the subject of many theoretical and applied works. Network adjustment has been considered in various mathematical spaces: in the Cartesian geocentric system, on the reference ellipsoid, and on a mapping plane. For practical reasons, a geodetic coordinate system associated with the reference ellipsoid is often adopted. In this case the Cartesian GNSS vectors are converted, for example, into geodesic parameters (azimuth and length) on the ellipsoid, but the simplest form of converted pseudo-observations is the direct differences of the geodetic coordinates. Unfortunately, such an approach may be essentially distorted by a systematic error resulting from the position error of the GNSS vector before its projection onto the ellipsoid surface. In this paper, an analysis of the impact of this error on the determined measures of geometric ellipsoid elements, including the differences of geodetic coordinates or geodesic parameters, is presented. The analysis shows that, for the adjustment of a combined network on the ellipsoid, the optimal functional approach with respect to the satellite observations is to create the observational equations directly for the original GNSS Cartesian vector components, writing them directly as functions of the geodetic coordinates (in numerical applications we use the linearised forms of the observational equations with explicitly specified coefficients). By retaining the original character of the Cartesian vector, one avoids any systematic errors that may occur in the conversion of the original GNSS vectors to ellipsoid elements, for example to the vector of geodesic parameters. The problem is developed theoretically and tested numerically. An example of the adjustment of a subnet taken from the database of reference stations of the ASG-EUPOS system is considered for the preferred functional model of the GNSS observations.
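
A minimal sketch of the preferred functional approach follows: the GNSS Cartesian vector components are modelled directly as functions of the stations' geodetic coordinates. The ellipsoid parameters are the standard GRS80/WGS84 values; the station coordinates are illustrative, and the linearised observation equations (partial derivatives) are omitted.

    import numpy as np

    # GRS80/WGS84-style ellipsoid parameters
    A_AXIS = 6378137.0
    FLATTENING = 1.0 / 298.257223563
    E2 = FLATTENING * (2.0 - FLATTENING)   # first eccentricity squared

    def geodetic_to_ecef(lat_deg: float, lon_deg: float, h: float) -> np.ndarray:
        """Convert geodetic coordinates (degrees, metres) to geocentric Cartesian XYZ."""
        lat, lon = np.deg2rad(lat_deg), np.deg2rad(lon_deg)
        n = A_AXIS / np.sqrt(1.0 - E2 * np.sin(lat) ** 2)   # prime-vertical radius
        return np.array([(n + h) * np.cos(lat) * np.cos(lon),
                         (n + h) * np.cos(lat) * np.sin(lon),
                         (n * (1.0 - E2) + h) * np.sin(lat)])

    def modelled_baseline(p_a, p_b) -> np.ndarray:
        """Modelled GNSS vector (dX, dY, dZ) between stations A and B, each given as
        (lat, lon, h); the observation equation compares this with the measured vector."""
        return geodetic_to_ecef(*p_b) - geodetic_to_ecef(*p_a)

    # illustrative approximate coordinates of two stations
    v_model = modelled_baseline((50.06, 19.94, 220.0), (50.29, 21.00, 190.0))
    print(v_model)   # residual = measured GNSS vector - v_model (before linearisation)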

Abstract

A precondition for locating technical infrastructure on a property is that the entrepreneur holds the right to use that property for construction purposes. Currently, there are two parallel, separate legal forms allowing the use of a real property for the purpose of locating transmission lines, i.e. transmission easement (right-of-way) established under civil law and expropriation by limiting the rights to a property under administrative law. The aim of the study is to compare these forms conferring the right to use real properties and to analyse the related surveying and legal problems occurring in practice. The research thesis of the article is that the current legal provisions for establishing legal titles to a property in order to locate transmission lines need to be amended. The study covered the legal conditions and the extent of expropriation and granting of right-of-way in the city of Krakow, as well as the problems associated with the ambiguous wording of the legal regulations. Part of the research was devoted to the forms of rights to land used to carry out similar projects in selected European countries (France, Czech Republic, Germany, Sweden). The justification for the analysis of these issues is dictated by the scale of practical use of the aforementioned forms of rights to land for locating technical infrastructure. Over the period 2011-2014, 651 agreements were concluded granting transmission right-of-way for 967 cadastral parcels owned by the city of Krakow, and 105 expropriation decisions were issued limiting the use of real properties in Krakow.

Abstract

Various sectors of the economy, such as transport and renewable energy, have shown great interest in sea bed models. The required measurements are usually carried out by ship-based echo sounding, but this method is quite expensive. A relatively new alternative is data obtained by airborne lidar bathymetry. This study investigates the accuracy of such data, which were obtained in the context of the project 'Investigation on the use of airborne laser bathymetry in hydrographic surveying'. A comparison with multi-beam echo sounding data shows only small differences in the depth values of the data sets. The IHO requirements on the total horizontal and vertical uncertainty are met by the laser data. The second goal of this paper is to compare three spatial interpolation methods, namely Inverse Distance Weighting (IDW), Delaunay Triangulation (TIN), and supervised Artificial Neural Networks (ANN), for the generation of sea bed models. The focus of our investigation is on the number of required sampling points, which is analysed by manually reducing the data sets. We found that the three techniques perform similarly, almost independently of the amount of sampling data in our test area. However, ANN are more stable when using a very small subset of points.
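
Of the three interpolation methods compared, IDW is the simplest to state; the sketch below is a minimal NumPy illustration with made-up soundings and an assumed power parameter of 2, not the study's actual configuration.

    import numpy as np

    def idw(xy_known, z_known, xy_query, power: float = 2.0) -> np.ndarray:
        """Inverse Distance Weighting: each queried depth is a weighted mean of the
        sampled depths, with weights 1 / distance**power."""
        d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
        d = np.maximum(d, 1e-12)              # avoid division by zero at sample points
        w = 1.0 / d ** power
        return (w @ z_known) / w.sum(axis=1)

    # illustrative soundings (x, y in metres, z = depth in metres)
    pts = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
    depths = np.array([5.2, 6.1, 4.8, 5.9])
    grid = np.array([[50.0, 50.0], [10.0, 20.0]])
    print(idw(pts, depths, grid))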

Abstract

The known standard recursion methods of computing the fully normalised associated Legendre functions do not give the necessary precision when the IEEE 754-2008 standard is applied, which creates problems of underflow and overflow. The analysis of the problems of calculating the Legendre functions shows that underflow is not dangerous by itself. The main problem that generates gross errors in the calculations is the effect of “absolute zero”. Once it appears in a forward column recursion, “absolute zero” converts to zero all values which are multiplied by it, regardless of whether a zero result of the multiplication is correct or not. Three methods of calculating the Legendre functions that remove the effect of “absolute zero” from the calculations are discussed here. These methods are also of interest because they have almost no limit on the maximum degree of the Legendre functions. It is shown that the numerical accuracy of these three methods is the same, but the CPU time of calculating the Legendre functions with the Fukushima method is minimal. Therefore, the Fukushima method is the best. Its main advantage is computational speed, which is an important factor in the calculation of such a large number of Legendre functions as 2 401 336 for EGM2008.
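
For context, the sketch below implements the standard forward column recursion for fully normalised associated Legendre functions (the recursion whose underflow behaviour the abstract discusses), not the Fukushima X-number method; the coefficient forms follow the commonly cited Holmes–Featherstone style.

    import numpy as np

    def alf_column(nmax: int, m: int, theta: float) -> np.ndarray:
        """Fully normalised associated Legendre functions P_nm(cos theta) for a fixed
        order m, n = m..nmax, via the standard forward column recursion.
        In plain double precision the sectoral seed ~ u**m underflows for high m,
        which is where the "absolute zero" effect described above originates."""
        t, u = np.cos(theta), np.sin(theta)
        p = np.zeros(nmax + 1)
        # sectoral seed P_mm
        pmm = 1.0
        if m >= 1:
            pmm = np.sqrt(3.0) * u
        for k in range(2, m + 1):
            pmm *= np.sqrt((2.0 * k + 1.0) / (2.0 * k)) * u
        p[m] = pmm
        if nmax > m:
            p[m + 1] = np.sqrt(2.0 * m + 3.0) * t * pmm
        for n in range(m + 2, nmax + 1):
            a = np.sqrt((2.0 * n - 1.0) * (2.0 * n + 1.0) / ((n - m) * (n + m)))
            b = np.sqrt((2.0 * n + 1.0) * (n + m - 1.0) * (n - m - 1.0)
                        / ((n - m) * (n + m) * (2.0 * n - 3.0)))
            p[n] = a * t * p[n - 1] - b * p[n - 2]
        return p[m:]

    # P_00, P_10, P_20 at theta = 60 degrees; P_20 should equal sqrt(5)/2 * (3*cos^2(theta) - 1)
    print(alf_column(2, 0, np.deg2rad(60.0)))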

Abstract

This paper analyses the use of visual variables in tables of statistical data on hospital beds as an important tool for revealing spatio-temporal dependencies. It is argued that some of the conclusions drawn from the data about public health and public expenditure on health have a spatio-temporal reference. Unlike previous studies, this article combines cartographic pragmatics and spatial visualisation with conclusions made earlier in the public health literature. While significant conclusions about health care and economic factors have been highlighted in research papers, this article is the first to apply visual analysis to a statistical table together with maps, an approach called previsualisation.

Abstract

Lexical knowledge sources are indispensable for research, education and general information. The transition of the reference works to the digital world has been a gradual one. This paper discusses the basic principles and structure of knowledge presentation, as well as user access and knowledge acquisition with specific consideration of contributions in German. The ideal reference works of the future should be interactive, optimally adapted to the user, reliable, current and quotable.

Abstract

The paper presents the results of investigating the effect of an increase in observation correlations on the detectability and identifiability of a single gross error, the sensitivity of the outlier test, and the response-based measures of internal reliability of networks. To reduce the practically incomputable number of possible test options when considering all the non-diagonal elements of the correlation matrix as variables, its simplest representation was used, namely a matrix with all non-diagonal elements of equal value, termed uniform correlation. By raising the common correlation value incrementally, a sequence of matrix configurations was obtained corresponding to an increasing level of observation correlation. For each of the measures characterising the above-mentioned features of network reliability, the effect is presented in diagram form as a function of the increasing level of observation correlations. The influence of observation correlations on the sensitivity of the w-test for correlated observations (Förstner 1983, Teunissen 2006) is investigated in comparison with the original Baarda's w-test designed for uncorrelated observations, to determine the character of the expected sensitivity degradation of the latter when used for correlated observations. The correlation effects obtained for the different reliability measures exhibit mutual consistency to a satisfactory extent. As a by-product of the analyses, a simple formula valid for any arbitrary correlation matrix is proposed for transforming Baarda's w-test statistics into the w-test statistics for correlated observations.
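
For reference, the two test statistics compared in the abstract are usually written as below (standard forms from the cited literature, reproduced here from memory; this is not the paper's proposed transformation formula).

    % Baarda's w-test statistic for uncorrelated observations
    % (\hat{e}_i: i-th least-squares residual, \sigma_{\hat{e}_i}: its standard deviation):
    w_i = \frac{\hat{e}_i}{\sigma_{\hat{e}_i}}
    %
    % w-test statistic for correlated observations with observation covariance matrix Q_{yy}
    % (\hat{e}: residual vector, Q_{\hat{e}\hat{e}}: covariance matrix of the residuals,
    %  c_i: canonical unit vector selecting observation i):
    w_i = \frac{c_i^{\mathsf{T}} Q_{yy}^{-1}\,\hat{e}}
               {\sqrt{c_i^{\mathsf{T}} Q_{yy}^{-1} Q_{\hat{e}\hat{e}} Q_{yy}^{-1} c_i}}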

Abstract

The paper presents an empirical methodology of reducing various kinds of observations in a geodetic network. A special case of reducing the observations concerns cartographic mapping. For numerical illustration and comparison of the methods, the conformal Gauss-Krüger mapping was applied. The empirical methods are an alternative to the classic differential and multi-stage methods. The numerical benefits concern in particular very long geodesics, created for example by GNSS vectors. In conventional methods the numerical errors of the reduction values depend significantly on the length of the geodesic. The proposed empirical methods do not have this unfavourable characteristic. The reduction value is determined as a difference (or a suitably scaled difference) of the corresponding measures of geometric elements (distances, angles), where these measures are approximated independently in the two spaces based on the known and corresponding approximate coordinates of the network points. Since the coordinates of the points are systematically improved in the iterative process of the network adjustment, the approximated reductions also converge to certain optimal values.
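
A minimal sketch of such an empirical distance reduction follows: the reduction is taken as the difference between the distance measured on the mapping plane and on the ellipsoid, both computed from approximate coordinates. The use of pyproj, the Polish CS92 zone (EPSG:2180, a Gauss-Krüger-type transverse Mercator) and the point coordinates are assumptions for illustration.

    from pyproj import Geod, Transformer
    from math import hypot

    geod = Geod(ellps="GRS80")
    to_plane = Transformer.from_crs("EPSG:4326", "EPSG:2180", always_xy=True)

    def empirical_distance_reduction(lon1, lat1, lon2, lat2) -> float:
        """Plane distance minus ellipsoidal geodesic distance (metres)."""
        _, _, s_ell = geod.inv(lon1, lat1, lon2, lat2)     # geodesic length on the ellipsoid
        x1, y1 = to_plane.transform(lon1, lat1)
        x2, y2 = to_plane.transform(lon2, lat2)
        s_plane = hypot(x2 - x1, y2 - y1)                  # chord length on the mapping plane
        return s_plane - s_ell

    # illustrative approximate coordinates of two points (lon, lat in degrees)
    print(empirical_distance_reduction(19.94, 50.06, 21.00, 50.29))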

Abstract

In this work, nine non-linear regression models were compared for sub-pixel impervious surface area mapping from Landsat images. The comparison was done in three study areas, both for the accuracy of imperviousness coverage evaluation at individual points in time and for the accuracy of imperviousness change assessment. The performance of the individual machine learning algorithms (Cubist, Random Forest, stochastic gradient boosting of regression trees, k-nearest neighbours regression, random k-nearest neighbours regression, Multivariate Adaptive Regression Splines, averaged neural networks, and support vector machines with polynomial and radial kernels) was also compared with the performance of heterogeneous model ensembles constructed from the best models trained using the particular techniques. The results proved that in the case of sub-pixel evaluation the most accurate prediction of change may not necessarily be based on the most accurate individual assessments. When single methods are considered, the obtained results suggest the Cubist algorithm for Landsat-based mapping of imperviousness for single dates. However, Random Forest may be endorsed when the most reliable evaluation of imperviousness change is the primary goal: it gave lower accuracies for the individual assessments, but a better prediction of change due to more correlated errors of the individual predictions. The heterogeneous model ensembles performed at least as well as the best individual models for individual time points. In the case of imperviousness change assessment, the ensembles always outperformed the single-model approaches. This means that it is possible to improve the accuracy of sub-pixel imperviousness change assessment using ensembles of heterogeneous non-linear regression models.
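
The sketch below shows one simple way to build a heterogeneous regression ensemble with scikit-learn's VotingRegressor (a plain average of member predictions). The feature/target arrays are synthetic placeholders, and since Cubist and some of the listed learners have no direct scikit-learn equivalent, a reduced set of model types is used for illustration; this is not the study's actual ensemble construction.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor, VotingRegressor
    from sklearn.neighbors import KNeighborsRegressor
    from sklearn.svm import SVR
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.random((500, 6))                       # e.g. Landsat band reflectances / indices
    y = np.clip(X[:, :3].mean(axis=1) + 0.05 * rng.standard_normal(500), 0, 1)  # imperviousness fraction

    members = [
        ("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
        ("gbt", GradientBoostingRegressor(random_state=0)),
        ("knn", KNeighborsRegressor(n_neighbors=7)),
        ("svr", SVR(kernel="rbf", C=10.0)),
    ]
    ensemble = VotingRegressor(estimators=members)  # simple average of member predictions

    for name, model in members + [("ensemble", ensemble)]:
        rmse = -cross_val_score(model, X, y, cv=5, scoring="neg_root_mean_squared_error").mean()
        print(f"{name:9s} CV RMSE = {rmse:.4f}")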

Abstract

This paper presents decision-making risk estimation based on planimetric large-scale map data, i.e. data sets or databases useful for creating planimetric maps at scales of 1:5,000 or larger. The studies were conducted on four sets of large-scale map data. Errors of the map data were used for a risk assessment of decisions about the localisation of objects, e.g. for land-use planning in the realisation of investments. An analysis was performed on a large statistical sample of shift vectors of control points, which were identified with the position errors of these points (errors of the map data). In this paper, empirical cumulative distribution function models for decision-making risk assessment were established. The established models of the empirical cumulative distribution functions of the shift vectors of control points involve polynomial equations. The degree of compatibility of the polynomial with the empirical data was evaluated by the convergence coefficient and by the indicator of the mean relative compatibility of the model. The application of an empirical cumulative distribution function allows an estimation of the probability of the occurrence of position errors of points in a database. The estimated decision-making risk is represented by the probability of the errors of points stored in the database.
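
A minimal sketch of an ECDF-plus-polynomial model of point position errors follows. The simulated error magnitudes and the polynomial degree are illustrative assumptions, not the paper's data or fitted model.

    import numpy as np

    rng = np.random.default_rng(0)
    errors = rng.rayleigh(scale=0.25, size=400)    # shift-vector lengths of control points [m]

    # empirical cumulative distribution function
    x = np.sort(errors)
    ecdf = np.arange(1, x.size + 1) / x.size

    # polynomial approximation of the ECDF
    coeffs = np.polyfit(x, ecdf, deg=5)
    model = np.poly1d(coeffs)

    # risk of a position error exceeding a tolerance, e.g. 0.5 m
    tol = 0.5
    p_exceed = 1.0 - np.clip(model(tol), 0.0, 1.0)
    print(f"P(position error > {tol} m) ~= {p_exceed:.3f}")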
