The main points of the UPoN-2018 talk and some valuable comments from the audience are briefly summarized. The talk surveyed the major issues with the notion of zero-point thermal noise in resistors and its visibility; moreover, it gave some new arguments. The new arguments support Kleen's old view that the known measurement data "showing" zero-point Johnson noise are instrumental artifacts caused by the energy-time uncertainty principle. We pointed out that, during the spectral analysis of blackbody radiation, another uncertainty principle is relevant, namely the position-momentum uncertainty principle, which causes only a widening of spectral lines instead of the zero-point noise artifact. This is why the Planck formula is correctly confirmed by blackbody radiation experiments. Finally, a conjecture about the zero-point noise spectrum of wide-band amplifiers is presented, but it is yet to be tested experimentally.
Low-frequency noise measurements have long been recognized as a valuable tool in the examination of the quality and reliability of metallic interconnections in the microelectronic industry. While characterized by very high sensitivity, low-frequency noise measurements can be extremely time-consuming, especially when tests have to be carried out over an extended temperature range and with high temperature resolution, as required by some advanced characterization approaches recently proposed in the literature. To address this issue, we designed a dedicated system for the characterization of the low-frequency noise produced by a metallic line vs. temperature. The system combines high flexibility and automation with excellent background noise levels. Test temperatures range from ambient temperature up to 300 °C. Measurements can be fully automated, with the temperature changing in pre-programmed steps. A temperature-ramp mode is also available which, with proper caution, can be used to obtain a virtually continuous plot of noise parameters vs. temperature.
In this paper the authors propose a decision support system for automatic blood smear analysis based on microscopic images. The images are pre-processed in order to remove irrelevant elements and to enhance the most important ones – the healthy blood cells (erythrocytes) and the pathologic ones (echinocytes). The separated blood cells are analysed in terms of their most important features by the eigenfaces method. The features are the basis for designing the neural network classifier, trained to distinguish between erythrocytes and echinocytes. As a result, the proposed system is able to analyse blood smear images in a fully automatic way and to deliver information on the number and statistics of the red blood cells, both healthy and pathologic. The system was examined in two case studies, involving canine and human blood, and the results were then consulted with experienced medical specialists. The accuracy of classification of red blood cells into erythrocytes and echinocytes reaches 96%.
The Kirchhoff-law-Johnson-noise (KLJN) secure key exchange scheme offers unconditional security; however, it can approach the perfect security limit only when the practical system's parameters approach the ideal behavior of its core circuitry. In the case of non-ideal features, a non-zero information leak is present. The study of such leaks is important for the proper design of practical KLJN systems and their privacy amplification in order to eliminate these problems.
Malignant melanomas are the most deadly type of skin cancer, yet, when detected early, they have high chances of successful treatment. In the last twenty years, interest in automatic recognition and classification of melanoma has increased dynamically, partly because of the appearance of public datasets with dermatoscopic images of skin lesions. Automated computer-aided skin cancer detection in dermatoscopic images is a very challenging task due to uneven sizes of datasets, large intra-class variation combined with small inter-class variation, and the existence of many artifacts in the images. One of the most recognized methods of melanoma diagnosis is the ABCD method. In the paper, we propose an extended version of this method and an intelligent decision support system based on neural networks that uses its results in the form of hand-crafted features. Automatic determination of the skin features with the ABCD method is difficult due to the large diversity of images of various quality, the existence of hair, different markers and other obstacles. Therefore, it was necessary to apply advanced methods of pre-processing the images. The proposed system is an ensemble of ten neural networks working in parallel, with one additional network using their results to generate the final decision. This structure increases the efficiency of operation by several percentage points compared with a single neural network. The proposed system is trained on over 5000 skin moles and afterwards tested on 200. The presented system can be used as a decision support system for primary care physicians, as a system enabling self-examination of the skin with a dermatoscope, and also as an important tool to improve biopsy decision making.
Compact radiators with circular polarization are important components of modern mobile communication systems. Their design is a challenging process which requires maintaining simultaneous control over several performance figures as well as the structure size. In this work, a novel design framework for multi-stage constrained miniaturization of antennas with circular polarization is presented. The method involves sequential optimization of the radiator in respect of selected performance figures and, eventually, the size. Optimizations are performed with an iteratively increased number of design constraints. The numerical efficiency of the method is ensured by a fast local-search algorithm embedded in a trust-region framework. The proposed design framework is demonstrated using a compact planar radiator with circular polarization. The optimized antenna is characterized by a small size of 271 mm² with 37% and 47% bandwidths in respect of 10 dB return loss and 3 dB axial ratio, respectively. The structure is benchmarked against state-of-the-art circular polarization antennas. Numerical results are confirmed by measurements of the fabricated antenna prototype.
The paper presents a method of obtaining short-term positioning accuracy based on micro electro-mechanical system (MEMS) sensors, together with an analysis of the results. A high-accuracy and fast positioning algorithm must be included due to the high risk of accidents in cities in the future, especially when autonomous objects are taken into account. High-level positioning systems should consider a number of sub-systems such as the global positioning system (GPS), CCTV video analysis, a system based on analysis of the signal strength of access points (AP), etc. Short-term positioning means that there are other locating systems with a sufficiently high degree of accuracy based on, e.g. a video camera, but the located object can disappear when it is hidden by other objects, e.g. people, things, shelves, etc. In such a case, MEMS sensors can be employed as a positioning system. The paper examines typical movement profiles of a radio-controlled (RC) model and fundamental filtering methods in respect of position accuracy. The authors evaluate the complexity and delay of the filter and the accuracy of the positioning in respect of the current speed and phase of movement (positive acceleration, constant speed) of the object. It is necessary to know whether and how the length of the filter changes the position accuracy. It has been shown that the use of fundamental filters, which provide solutions in a short time, makes it possible to locate objects with a small error in a limited time.
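The fundamental filters referred to above are not specified in detail in the abstract; as a hedged illustration only (function name and window length are our assumptions, not the authors' implementation), a causal moving-average filter shows the basic trade-off between noise suppression and reporting delay:

```python
def moving_average(samples, n):
    """Causal moving-average filter: each output is the mean of the
    last n input samples (fewer at the start of the sequence).
    A longer n suppresses sensor noise better but delays the reported
    position by roughly (n - 1) / 2 samples."""
    out = []
    for i in range(len(samples)):
        window = samples[max(0, i - n + 1): i + 1]
        out.append(sum(window) / len(window))
    return out
```

Evaluating such a filter against the true trajectory for each movement phase (acceleration, constant speed) is one way to study how the filter length changes the position accuracy, as the paper does.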
According to metrological guidelines and specific legal requirements, every smart electronic electricity meter has to be verified repeatedly after pre-defined regular time intervals. The problem is that in most cases these pre-defined time intervals are based on previous experience or empirical knowledge and rarely on scientifically sound data. Since the verification itself is a costly procedure, it would be advantageous to put more effort into defining the required verification periods. In this way, the fixed verification interval, recommended by various internal documents, standardised evaluation procedures and national legislation, could be technically and scientifically better justified and consequently more appropriate and trustworthy for the end user. This paper describes an experiment to determine the effect of alternating temperature and humidity, and of a constant high current, on a smart electronic electricity meter's measurement accuracy. Based on an analysis of these effects it is proposed that the current fixed verification interval could be revised, also taking into account different climatic influences. The findings of this work could influence a new standardized procedure in respect of a meter's verification interval.
In recent years, many scientific and industrial centres in the world have developed virtual reality systems or laboratories. At present, among the most advanced virtual reality systems are CAVE-type (Cave Automatic Virtual Environment) installations. Such systems usually consist of four, five, or six projection screens arranged in the form of a closed or semi-closed space. The basic task of such systems is to ensure the effect of user "immersion" in the surrounding environment. The effect of user "immersion" in virtual reality in such systems largely depends on the optical properties of the system, especially on the quality of projection of three-dimensional images. In this paper, techniques of projection of three-dimensional (3D) images in CAVE-type virtual reality systems are analysed. The requirements these techniques impose on such virtual reality systems are outlined. Based on the results of measurements performed in a unique CAVE-type virtual reality laboratory equipped with two different 3D projection techniques, the Immersive 3D Visualization Lab (I3DVL), recently opened at the Gdańsk University of Technology, the stereoscopic parameters and colour gamut of the Infitec and Active Stereo stereoscopic projection techniques are examined and discussed. The obtained results make it possible to estimate the projection system quality for application in CAVE-type virtual reality installations.
Many researchers have contributed to Quantum Key Distribution (QKD) since the first protocol, BB84, was proposed in 1984. One of the crucial problems in QKD is to guarantee its security with finite key lengths by Privacy Amplification (PA). However, finite-key analyses show a trade-off between the security of BB84 and the secure key rates. This study analyses two examples to show concrete trade-offs. Furthermore, even though QKD keys have been perceived to be arbitrarily secure, this study shows a fundamental limitation in the security of the keys by connecting the Leftover Hash Lemma and Guessing Secrecy on the QKD keys.
In this paper we describe our own construction of a tuneable light source based on a set of light emitting diodes covering the visible spectrum, using a homogenizing rod instead of the commonly used, less energy-efficient integrating spheres. The expected prime application of the source is a medical endoscopic system; however, it can also be used for other purposes requiring both multispectral operation and a tuneable white light source. We describe the construction of the source and include a precise characterization of the output white light: the distribution of CCT, Duv, Δu′v′ and colour rendering indices (Ra, R9, Rf, Rg) of light in several planes located at various distances. The obtained results prove that our source is characterized by very good colour rendition according to the Ra and Rf methods for various correlated colour temperatures (2700–6500) K. As an example application, images of the Macbeth colour chart registered with an RGB camera included in the laboratory measurement stand are presented. The obtained results prove that, after calibration of the whole system, this source can be used in many applications where evaluation of objects requires precise analysis of their colour and multispectral procedures.
In this paper, we propose and experimentally demonstrate a new method for optical frequency transfer over fibre. Instead of the dual acousto-optic modulators (AOMs) adopted in the traditional fibre phase noise compensation setup, an active fibre phase noise compensation scheme with a single acousto-optic modulator (AOM) is used. The configuration simplifies the equipment at the user end while maintaining high-performance optical frequency transfer stability. We demonstrate actively stabilized coherent transfer at an optical frequency of 193.55 THz over 10-km spooled fibre, obtaining a relative frequency stability (Allan deviation) of 3.84 × 10⁻¹⁶ at 1 s and 4.08 × 10⁻¹⁸ at 10⁴ s, which is improved by about 2–3 orders of magnitude in comparison with the uncompensated link, which achieves a relative frequency stability of 1.81 × 10⁻¹⁴ at 1 s and 2.48 × 10⁻¹⁵ at 10⁴ s.
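The stabilities quoted above are Allan deviations of the transferred frequency at two averaging times. A minimal non-overlapping estimator for fractional-frequency samples (a sketch; the function name and the non-overlapping choice are ours, not the authors' processing chain) is:

```python
import math

def allan_deviation(y, m):
    """Non-overlapping Allan deviation of fractional-frequency samples y
    at an averaging time of m basic sample intervals."""
    n = len(y) // m
    # Average the raw samples into n contiguous bins of length m.
    means = [sum(y[i * m:(i + 1) * m]) / m for i in range(n)]
    # Allan variance: half the mean squared difference of adjacent bins.
    diffs = [(means[i + 1] - means[i]) ** 2 for i in range(n - 1)]
    return math.sqrt(sum(diffs) / (2 * (n - 1)))
```

Evaluating this for increasing m produces the stability-vs-averaging-time curve from which values such as those at 1 s and 10⁴ s are read off.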
Reliable estimation of longitudinal force and sideslip angle is essential for vehicle stability and active safety control. This paper presents a novel longitudinal force and sideslip angle estimation method for four-wheel independent-drive electric vehicles, in which cascaded Kalman filters are applied. Also, a modified tire model is proposed to improve the accuracy and reliability of sideslip angle estimation. In the design of the longitudinal force observer, considering that the longitudinal force is the unknown input of the electric driving wheel model, an expanded electric driving wheel model is presented and the longitudinal force is obtained with a strong tracking filter. Based on the longitudinal force observer, and taking into consideration uncertain interferences in the vehicle dynamic model, a sideslip angle estimation method is designed using a robust Kalman filter, and a novel modified tire model is proposed that corrects the original tire model using the estimated longitudinal tire forces. Simulations and experiments were carried out, and the effectiveness of the proposed estimation method was verified.
This paper describes a synthetic aperture radar system for tactical-level imagery intelligence installed on board an unmanned aerial vehicle. Selected results of its tests are provided. The system contains interchangeable S-band and Ku-band linear frequency-modulated continuous-wave radar sensors that were built within the framework of a research project named WATSAR, conducted by the Military University of Technology and WB Electronics S.A. One of several algorithms of radar image synthesis implemented in the scope of the project is described in this paper. The WATSAR system can create radar images both online and off-line.
A revision of the standard approach to the characterization of thin-semiconductor-layer Hall samples is proposed. Our results show that a simple check of I(V) curve linearity at room temperature might be insufficient for correct determination of the bias conditions of a sample before Hall effect measurements. This is caused by the nonlinear behaviour of the electrical contact layers, which, together with the tested layer, should a priori be treated as a metal-semiconductor-metal (MSM) structure. Our approach was examined with a Be-doped p-type InAs epitaxial layer with four gold contacts. Despite the use of full high-quality photolithography, a significant asymmetry in the maximum differential resistance (Rd) values and their positions relative to the zero voltage (or current) value was observed for different contacts. This suggests that such characterization should be performed before each high-precision magneto-transport measurement in order to optimize the bias conditions.
The electroencephalogram (EEG) is one of the biomedical signals measured during all-night polysomnography to diagnose sleep disorders, including sleep apnoea. Usually two central EEG channels (C3-A2 and C4-A1) are recorded, but typically only one of them is used. The purpose of this work was to compare discriminative features characterizing normal breathing, as well as obstructive and central sleep apnoeas, derived from these central EEG channels. The same methodology of feature extraction and selection was applied separately to both synchronous signals. The features were extracted by combined discrete wavelet and Hilbert transforms. Afterwards, statistical indexes were calculated and the features were selected using the analysis of variance and multivariate regression. According to the obtained results, there is a partial difference in the information carried by the C3-A2 and C4-A1 EEG channels, so data from both channels should preferably be used together for automatic sleep apnoea detection and differentiation.
To find effective and practical methods to distinguish gas-liquid two-phase flow patterns, new flow pattern maps are established using the differential pressure across a classical Venturi tube. The differential pressure signal was first decomposed adaptively into a series of intrinsic mode functions (IMFs) by ensemble empirical mode decomposition. Hilbert marginal spectra of the IMFs showed that the flow patterns are related to the amplitude of the pressure fluctuation. The cross-correlation method was employed to select the characteristic IMF, and then the energy ratio of the characteristic IMF to the raw signal was proposed to construct flow pattern maps with the volumetric void fraction and with the two-phase Reynolds number, respectively. The identification rates of these two maps are verified to be 91.18% and 92.65%, respectively. This approach provides a cost-effective solution to the difficult problem of identifying gas-liquid flow patterns in the industrial field.
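The energy-ratio feature can be illustrated with a short sketch. Assuming the IMFs have already been produced by the ensemble empirical mode decomposition (the decomposition itself is not shown, and all function names are ours, not the paper's code), selecting the characteristic IMF by cross-correlation and forming the energy ratio might look like:

```python
import math

def energy(x):
    """Signal energy: sum of squared samples."""
    return sum(v * v for v in x)

def pearson(a, b):
    """Pearson cross-correlation coefficient of two equal-length signals."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(va * vb)

def characteristic_energy_ratio(raw, imfs):
    """Pick the IMF most correlated with the raw signal and return the
    ratio of its energy to the raw-signal energy."""
    best = max(imfs, key=lambda imf: abs(pearson(raw, imf)))
    return energy(best) / energy(raw)
```

The resulting scalar, plotted against the volumetric void fraction or the two-phase Reynolds number, is the kind of coordinate from which such flow pattern maps are built.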
New measurement technologies, e.g. Light Detection And Ranging (LiDAR), generate very large datasets. In many cases, it is reasonable to reduce the number of measuring points, but in such a way that the datasets after reduction satisfy specific optimization criteria. For this purpose the previously proposed Optimum Dataset (OptD) method can be applied. The OptD method with the use of several optimization criteria is called OptD-multi and it yields several acceptable solutions. The paper presents methods of selecting the single best solution based on the assumptions of two selected numerical optimization methods: the weighted sum method and the ε-constraint method. The research was carried out on two measurement datasets from Airborne Laser Scanning (ALS) and Mobile Laser Scanning (MLS). The analyses have shown that it is possible to use numerical optimization methods (often used in construction) to reduce LiDAR data. The two methods gave different results, determined by the initially adopted assumptions, and, in line with earlier findings, the reduced datasets can be used instead of the original dataset for various studies.
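The two selection rules can be sketched as follows (a simplified illustration under our own assumptions: criterion values are already normalized to comparable scales, ties are broken by order, and the function names are ours). Each OptD-multi candidate is represented by a tuple of minimized criterion values:

```python
def weighted_sum_best(solutions, weights):
    """Weighted sum method: pick the candidate that minimizes the
    weighted sum of its criterion values."""
    return min(solutions, key=lambda f: sum(w * c for w, c in zip(weights, f)))

def epsilon_constraint_best(solutions, primary, bounds):
    """Epsilon-constraint method: minimize the criterion with index
    `primary` among candidates whose other criteria stay within the
    given epsilon bounds."""
    feasible = [f for f in solutions
                if all(c <= e for i, (c, e) in enumerate(zip(f, bounds))
                       if i != primary)]
    return min(feasible, key=lambda f: f[primary])
```

The two rules generally pick different candidates, which mirrors the observation above that the methods give different results depending on the initially adopted assumptions (weights vs. bounds).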
The accuracy and reliability of the Kalman filter are easily affected by gross errors in the observations. Although a robust Kalman filter based on equivalent weight function models can reduce the impact of gross errors on the filtering results, the conventional equivalent weight function models are more suitable for observations with the same noise level. For Precise Point Positioning (PPP) with multiple types of observations that have different measurement accuracies and noise levels, the filtering results obtained with conventional robust equivalent weight function models are not the best ones. To deal with this problem, a classification robust equivalent weight function model based on the t-inspection statistics is proposed, which has better performance than the conventional equivalent weight function models in the case of no more than one gross error in a certain type of observations. However, in the case of multiple gross errors in a certain type of observations, the performance of the conventional robust Kalman filter based on the two kinds of equivalent weight function models is barely satisfactory due to the interaction between gross errors. To address this problem, an improved classification robust Kalman filtering method is further proposed in this paper. To verify and evaluate the performance of the proposed method, simulation tests were carried out based on GPS/BDS data and their results were compared with those obtained with the conventional robust Kalman filtering method. The results show that the improved classification robust Kalman filtering method can effectively reduce the impact of multiple gross errors on the positioning results and significantly improve the positioning accuracy and reliability of PPP.
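The equivalent-weight idea behind robust Kalman filtering can be illustrated with an IGG-III-type weight function applied to a standardized residual (the threshold values below are typical textbook choices, not taken from the paper, and the paper's classification scheme per observation type is not reproduced here):

```python
def igg_weight(v, k0=1.5, k1=4.5):
    """IGG-III equivalent weight factor for a standardized residual v:
    full weight for small residuals, smooth down-weighting in the
    mid range, and zero weight (rejection) for large residuals.
    The observation's weight is multiplied by this factor before the
    filter gain is computed."""
    a = abs(v)
    if a <= k0:
        return 1.0
    if a <= k1:
        return (k0 / a) * ((k1 - a) / (k1 - k0)) ** 2
    return 0.0
```

A classification scheme in the spirit of the paper would apply such a function separately within each observation type, so that a gross error in one type does not distort the weights of the others.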
The paper presents campaigns of mobile satellite measurements carried out in 2009–2015 on railway and tram lines. The accuracy of the measurement method has been analysed on the basis of the results obtained in both the horizontal and vertical planes. The track axis deviation from the defined geometric shape has been analysed in areas clearly defined in terms of geometry, i.e. on straight sections and sections with a constant longitudinal inclination. The values of measurement errors have been estimated on the basis of signals subjected to appropriate filtration processes. The paper attempts to evaluate how the possibilities of using GNSS techniques to determine the shape of the railway track axis changed from 2009 to 2015. The determined average value of the measurement error now equals a few millimetres. This achievement is very promising for the prospects of mobile satellite measurements in railway engineering.
A metrological verification of a high-precision digital multimeter was carried out by the laboratory for calibration of programmable electrical multifunction instruments of the National Institute of Metrological Research (INRIM) in order to verify its accuracy and stability. The instrument was tested over a period of six months for five low-frequency electrical quantities (DC and AC voltage and current, and DC resistance). Its stability and precision were compared with the accuracy specifications of the manufacturer. As a new approach, a performance index of the DMM was introduced and evaluated for each examined measurement point. The DMM showed a satisfactory agreement with its specifications, placing it at the level of other top-class DMMs, and even better at some measurement points.
The Hopkinson pressure bar has been developed to calibrate high-g accelerometers and assess their capacity. Extreme caution is indispensable when performing calibration under severe conditions, such as extremely high overload peaks and long stress durations. In the paper, the Hopkinson bar calibrating system is critically appraised. A limiting formula is deduced based on the stress wave theory. It indicates that the overload peak and the stress duration are limited by the elastic limit and the wave speed of the Hopkinson bar material. Stress wave configurations in the form of both a linear ramp and a cosine function were designed theoretically to meet typical calibration requirements. They were confirmed experimentally with the aid of the pulse shaping technique. Their corresponding calibration characteristics were analysed critically, and it was found that the cosine stress wave can achieve acceleration peak or duration values π/2 times greater than those obtained with the linear stress wave. Finally, some suggestions are proposed for more extreme calibration requirements.
The paper provides a statistical analysis of photographs of four various granular materials (peas, pellets, triticale, wood chips). For the analysis, the (parametric) ANOVA and the (non-parametric) Kruskal-Wallis tests were applied. Additionally, the (parametric) two-sample t-test and the (non-parametric) Wilcoxon rank-sum test for pairwise comparisons were performed. In each case, the Bonferroni correction was used. The analysis shows statistical evidence of the presence of differences between the respective average discrete pixel intensity distributions (PID), induced by the histograms in each group of photos, which cannot be explained only by the existing differences among single granules of different materials. The proposed approach may contribute to the development of a fast inspection method for comparison and discrimination of granular materials that differ from the reference material in the production process.
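The Bonferroni correction applied to the pairwise comparisons above is simple arithmetic: with m simultaneous tests, each raw p-value is compared against alpha/m. A minimal sketch (illustrative p-values, not the paper's data; function name ours):

```python
def bonferroni(p_values, alpha=0.05):
    """Bonferroni correction for m simultaneous pairwise tests:
    each raw p-value is compared against alpha / m.
    Returns (p, significant_after_correction) for each test."""
    m = len(p_values)
    return [(p, p < alpha / m) for p in p_values]
```

With three pairwise tests, the effective per-test threshold drops from 0.05 to about 0.0167, which controls the family-wise error rate at the nominal alpha.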
Combining surface measurement data from individual measurements of surface fragments is an issue that has so far been addressed mainly for flat surfaces. The combination is based on making ‘overlap’ measurements according to a specific measurement strategy; the algorithm then synthesizes the measurement data for the common part (data fusion). This paper presents a method of combining partial data into one larger set using image processing methods. The purpose of the analysis is to combine surface data of a more complex shape in terms of surface roughness and waviness. A successful attempt was made to combine surface measurement data located on a cylindrical (convex) surface. A rotary table was designed and used for surface data acquisition. The datasets were acquired with the use of a CCI 6000 instrument (366 μm × 366 μm) with an assumed overlap of at least 20%. The measurement datasets were first pre-processed: filling in non-measured points, levelling and form removal were applied. For the processed datasets, the common part was identified (data registration) and then data fusion was performed. An example of stitching surface datasets shows the usefulness of the presented methodology.