This paper presents a study of the main limitations and problems affecting the robustness of diagnostic algorithms used in the diagnostics of complex chemical processes, together with selected exemplary solutions for increasing that robustness. Five major problems were identified in the study, associated with: uncertainty of fault detection, uncertainty of reasoning, changes in the structure of the diagnosed process, delays in the formation of fault symptoms, and multiple faults. A brief description and exemplary solutions that increase the robustness of diagnostic algorithms are given. The proposed methods were selected with applicability to on-line monitoring and diagnostics of complex chemical processes in mind.
This paper presents a comprehensive metrological analysis of the Microsoft Kinect motion sensor performed using a proprietary flat marker. The position of the designed marker was estimated in the external coordinate system associated with the sensor. The study includes calibration of the RGB and IR cameras, parameter identification and image registration. The metrological analysis is based on data corrected for sensor optical distortions. From the metrological point of view, localization errors are related to the distance of an object from the sensor. Therefore, the rotation angles were determined and an accuracy assessment of the depth maps was performed. The analysis was carried out for distances to the marker in the range of 0.8–1.65 m. The maximum average error was 23 mm at a distance of 1.6 m.
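The distortion correction mentioned in the abstract typically follows the standard radial lens model. A minimal sketch is given below; the intrinsics (fx, fy, cx, cy) and coefficients (k1, k2) are hypothetical illustrative values, not the calibration results of the study, and the inversion is a first-order approximation:

```python
# Hypothetical intrinsics and radial distortion coefficients
# (illustrative values, not the calibration results from the paper).
fx, fy = 525.0, 525.0      # focal lengths [px]
cx, cy = 319.5, 239.5      # principal point [px]
k1, k2 = 0.02, -0.005      # radial distortion coefficients

def undistort_point(u, v):
    """Map a distorted pixel (u, v) to its radially corrected position."""
    # Normalize pixel coordinates to camera coordinates
    x = (u - cx) / fx
    y = (v - cy) / fy
    r2 = x * x + y * y
    # First-order approximate inverse of the radial model
    scale = 1.0 / (1.0 + k1 * r2 + k2 * r2 * r2)
    xu, yu = x * scale, y * scale
    return xu * fx + cx, yu * fy + cy

# A point at the principal point is unaffected by radial distortion
print(undistort_point(cx, cy))  # -> (319.5, 239.5)
```

In practice such corrections are computed with a full calibration toolchain (e.g. OpenCV's camera model, which also handles tangential distortion); the sketch only shows the radial term that dominates near the image centre.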
This paper presents an enhanced internal model control (EIMC) scheme for a time-delayed second order unstable process, which is subjected to exogenous disturbance and model variations. Even though the conventional internal model control (IMC) can provide an asymptotic tracking response with desired stability margins, the major limitation of conventional IMC is that it cannot be applied to an unstable system because a small exogenous disturbance can trigger the control signal to grow unbounded. Hence, modifying the conventional IMC structure to guarantee internal stability, we present an EIMC scheme which can offer a better trade-off between setpoint tracking and disturbance rejection characteristics. To improve the load disturbance rejection characteristics and attenuate the effect of sensor noise, we solve the selection of controller gains as an H∞ optimization problem. One of the key aspects of the EIMC scheme is that the robustness of the closed loop system can be tuned via a single tuning parameter. The performance of the EIMC scheme is experimentally assessed on a magnetic levitation plant for a reference tracking application. Experimental results substantiate that the EIMC scheme can effectively counteract the inherent time delay in the model and offer precise tracking, even in the presence of exogenous disturbance. Moreover, by comparing the trajectory tracking performance of EIMC with that of the proportional integral velocity (PIV) controller through the cumulative power spectral density (CPSD) of the tracking error, we show that the EIMC can offer a better low frequency servo response with minimal vibrations.
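The classic IMC structure that the abstract builds on can be illustrated with a minimal discrete-time simulation. The sketch below uses a *stable* first-order plant with a perfect internal model and purely illustrative numbers; the unstable, time-delayed second-order case of the paper requires the modified (EIMC) structure described above. The filter pole `lam` plays the role of the single robustness tuning parameter:

```python
# Minimal discrete-time sketch of the classic IMC structure for a stable
# first-order plant y[k+1] = a*y[k] + b*u[k]. All values are illustrative.
a, b = 0.9, 0.1          # plant (and perfect internal model) parameters
lam = 0.7                # IMC filter pole: the single robustness tuning knob

y = ym = 0.0             # plant and internal-model outputs
qs = 0.0                 # state of the IMC controller Q = f * G_model^{-1}
r = 1.0                  # unit step reference
for k in range(200):
    d = y - ym                       # model-mismatch / disturbance estimate
    e = r - d                        # IMC controller input
    qs = lam * qs + (1 - lam) * e    # first-order IMC filter f(z)
    # Invert the model dynamics so the model output follows the filter state
    u = (qs - a * ym) / b
    y = a * y + b * u                # plant update
    ym = a * ym + b * u              # internal-model update

print(round(y, 3))  # -> 1.0 (output converges to the reference)
```

With a perfect model the feedback signal `d` stays zero and the closed loop behaves like the filter alone, which is why a smaller `lam` speeds up tracking at the cost of robustness; for an unstable plant this naive inversion fails, motivating the modified structure in the paper.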
This article takes up the matter of contemporary threats to cities and urbanity, setting the problems cities face today against the background of the two categories of the resilient city and the city developing sustainably. The author describes and presents the evolution of the sustainable development concept as such, as well as the generational change in priorities that has taken place where the development of urbanised areas is concerned, given the way the concept has undergone a certain devaluation, in the light of its failure to achieve fulfilment. The challenges cities face today require multi-faceted activity, in respect of increased inclusivity, robustness and resilience, and flexibility. This leaves today's idea of the resilient city embracing old elements of the sustainable city, but also augmenting them in various ways.
Although explicit commutativity conditions for second-order linear time-varying systems have appeared in some literature, these are all for initially relaxed systems. This paper presents explicit necessary and sufficient conditions for the commutativity of second-order linear time-varying systems with non-zero initial conditions. It is interesting that the second requirement for the commutativity of non-relaxed systems plays an important role in the commutativity conditions when non-zero initial conditions exist. Another highlight is that the commutativity of switched systems is considered and the spoiling of commutativity at the switching instants is illustrated for the first time. The simulation results support the theory developed in the paper.
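The role of initial conditions can be illustrated numerically. The sketch below is a discrete-time, first-order stand-in for the paper's continuous-time second-order analysis: two cascaded linear systems produce the same output in either order when initially relaxed, but generally not when their initial conditions are non-zero (all pole values and initial states are illustrative):

```python
def cascade(a, b, x0, z0, u):
    """Run first-order system with pole a (init x0) followed by the one
    with pole b (init z0); return the output sequence of the second stage."""
    x, z, out = x0, z0, []
    for uk in u:
        out.append(z)
        # simultaneous state updates: second stage is driven by the
        # first-stage output at the current instant
        x, z = a * x + uk, b * z + x
    return out

u = [1.0] + [0.0] * 9   # unit impulse input

# Relaxed case: swapping the order leaves the output unchanged
ab = cascade(0.5, 0.8, 0.0, 0.0, u)
ba = cascade(0.8, 0.5, 0.0, 0.0, u)
print(all(abs(p - q) < 1e-12 for p, q in zip(ab, ba)))  # -> True

# Non-relaxed case: system A carries initial state 1, and the outputs
# of the two orderings differ from the very first sample
ab_ic = cascade(0.5, 0.8, 1.0, 0.0, u)  # A first (init 1), then B (init 0)
ba_ic = cascade(0.8, 0.5, 0.0, 1.0, u)  # B first (init 0), then A (init 1)
print(ab_ic == ba_ic)  # -> False
```

The relaxed equality follows from the symmetry of the cascade transfer function; the non-relaxed discrepancy is the discrete analogue of the extra requirement the abstract refers to.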
A product is referred to as robust when its performance is consistent. In current product robustness paradigms, robustness is the responsibility of engineering design: drawings and 3D models should be released to manufacturing after all possible robust design principles have been applied. However, no methods are reported for manufacturing to carry and improve product robustness after the design freeze. This paper proposes a process of inducing product robustness at all stages of product development, from design release to the start of mass production. A manufacturing strategy of absorbing all obvious variations and an approach of turning variations to cancel one another are defined. The application feasibility is verified and a robustness quantification method is established at each stage. The theoretical and actual sensitivities of different parameters are identified as indicators. Theoretical and actual performance variation and the accuracy of estimation are established as the robustness metric. Manufacturing plan alignment to design, complementing the design and process sensitivities, countering process mean shifts with tool deviations, and highly adjustable assembly tools are enablers to achieve product robustness.
In this paper, a robust and perceptually transparent single-level and multi-level blind audio watermarking scheme using wavelets is proposed. A randomly generated binary sequence is used as a watermark, and wavelet function coding is used to embed the watermark sequence in audio signals. Multi-level watermarking is used to enhance payload capacity and can be used for different levels of security. The robustness of the scheme is evaluated by applying different attacks such as filtering, sampling rate alteration, compression, noise addition, amplitude scaling, and cropping. The simulation results obtained show that the proposed watermarking scheme is resilient to the various attacks except cropping. Perceptual transparency of the watermark is measured using the Perceptual Evaluation of Audio Quality (PEAQ) basic model of ITU-R (PEAQ ITU-R BS.1387) on the Sound Quality Assessment Material (SQAM) provided by the European Broadcasting Union (EBU). The average Objective Difference Grade (ODG) measured for this method is -0.067 and -0.080 for single-level and multi-level watermarked audio signals, respectively. In the proposed single-level digital audio watermarking scheme, the payload capacity is increased by 19.05% as compared to the single-level Chirp-Based Digital Audio Watermarking (CB-DAWM) scheme.
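The abstract does not detail the wavelet function coding itself; the sketch below illustrates the general idea of blind wavelet-domain embedding using a single-level Haar transform and quantization-index modulation, a common embedding technique used here only as a stand-in (the step `DELTA`, the Haar basis, and the QIM rule are all assumptions, not the authors' scheme):

```python
import numpy as np

DELTA = 0.1  # quantization step (illustrative)

def haar_dwt(x):
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def haar_idwt(approx, detail):
    even = (approx + detail) / np.sqrt(2)
    odd = (approx - detail) / np.sqrt(2)
    out = np.empty(2 * len(approx))
    out[0::2], out[1::2] = even, odd
    return out

def embed(signal, bits):
    a, d = haar_dwt(signal)
    for i, bit in enumerate(bits):
        # Quantization-index modulation: snap the coefficient to an even or
        # odd multiple of DELTA depending on the bit value.
        q = int(np.round(a[i] / DELTA))
        if q % 2 != bit:
            q += 1
        a[i] = q * DELTA
    return haar_idwt(a, d)

def extract(signal, n_bits):
    # Blind extraction: only the quantization step is needed, not the host.
    a, _ = haar_dwt(signal)
    return [int(np.round(a[i] / DELTA)) % 2 for i in range(n_bits)]

rng = np.random.default_rng(0)
audio = rng.uniform(-1, 1, 1024)
watermark = [1, 0, 1, 1, 0, 0, 1, 0]
marked = embed(audio, watermark)
print(extract(marked, len(watermark)) == watermark)  # -> True
```

Multi-level variants apply the same embedding at deeper decomposition levels, which is how payload capacity can be increased at the cost of robustness trade-offs.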
Redundancy-based methods are proactive scheduling methods for solving the Project Scheduling Problem (PSP) with non-deterministic activity durations. The fundamental strategy of these methods is to estimate activity durations by adding extra time to the original duration. The extra time makes it possible to account for the risks that may affect the activity durations and to reduce the number of adjustments to the baseline generated for the project. In this article, four redundancy-based methods are proposed and compared using two robustness indicators. These indicators were calculated after running a simulation process. Linear programming was applied as the solution technique to generate the baselines of the 480 projects analyzed. Finally, the results obtained made it possible to identify the most adequate method for solving the PSP with probabilistic activity durations and generating robust baselines.
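The core idea of duration redundancy can be sketched in a few lines: buffer each activity by some multiple of its duration spread, then estimate a robustness indicator by Monte Carlo simulation. This is a minimal illustration with a hypothetical serial project and a single indicator (the probability of meeting the baseline makespan); the paper's four methods, two indicators, and LP-generated baselines are not reproduced here:

```python
import random

random.seed(1)

# Hypothetical serial project: (expected duration, std deviation) per activity.
activities = [(5.0, 1.0), (3.0, 0.5), (8.0, 2.0), (4.0, 1.0)]

def buffered(durations, k):
    """Redundancy-based estimate: add k standard deviations of buffer."""
    return [mu + k * sigma for mu, sigma in durations]

def robustness(baseline_makespan, n_runs=5000):
    """Fraction of simulated scenarios finishing within the baseline
    (one simple robustness indicator among the many possible)."""
    hits = 0
    for _ in range(n_runs):
        realized = sum(max(0.0, random.gauss(mu, sigma))
                       for mu, sigma in activities)
        hits += realized <= baseline_makespan
    return hits / n_runs

for k in (0.0, 0.5, 1.0):
    plan = sum(buffered(activities, k))
    print(f"buffer k={k}: baseline={plan:.2f}, robustness={robustness(plan):.2f}")
```

Larger buffers raise the robustness indicator but lengthen the baseline, which is precisely the trade-off the compared methods resolve differently.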
The paper focuses on the problem of robust fault detection using analytical methods and soft computing. Taking into account the model-based approach to Fault Detection and Isolation (FDI), possible applications of analytical models, first of all observers with unknown inputs, are considered. The main objective is to show how to employ the bounded-error approach to determine the uncertainty of soft computing models (neural networks and neuro-fuzzy networks). It is shown that, based on the soft computing model uncertainty, defined as a confidence range for the model output, adaptive thresholds can be described. The paper contains a numerical example that illustrates the effectiveness of the proposed approach for increasing the reliability of fault detection. A comprehensive simulation study regarding the DAMADICS benchmark problem is performed in the final part.
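The adaptive-threshold idea can be illustrated numerically: instead of comparing the residual against one fixed level, the detector flags a fault only when the residual leaves the model's confidence band. In the sketch below a synthetic signal stands in for the neural/neuro-fuzzy model, and the band shape and fault magnitude are illustrative assumptions, not DAMADICS data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated process output and a hypothetical soft-computing model of it,
# whose uncertainty is given as a confidence band around the model output.
t = np.linspace(0, 10, 500)
process = np.sin(t)
model = np.sin(t) + rng.normal(0, 0.02, t.size)   # imperfect model
band = 0.05 + 0.05 * np.abs(np.cos(t))            # output-dependent band

# Fault-free operation: residual stays inside the adaptive threshold
residual = process - model
fault_flags = np.abs(residual) > band
print("false alarms in fault-free data:", int(fault_flags.sum()))

# Inject an additive fault of magnitude 0.5 after t = 7
faulty = process + 0.5 * (t > 7)
residual_f = faulty - model
detected = bool(np.any(np.abs(residual_f[t > 7]) > band[t > 7]))
print("fault detected:", detected)  # -> True
```

A fixed threshold wide enough to avoid false alarms everywhere would have to exceed the band's maximum, delaying or masking small faults; the adaptive band tightens where the model is confident, which is the reliability gain the numerical example in the paper demonstrates.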