Search results

Number of results: 24

Abstract

The object of the present study is to investigate the influence of damping uncertainty and statistical correlation on the dynamic response of structures with random damping parameters in the neighbourhood of a resonant frequency. A Non-Linear Statistical model (NLSM) is demonstrated to successfully predict the probabilistic response of an industrial building structure with correlated random damping. A practical computational technique to generate first- and second-order sensitivity derivatives is presented, and the validity of the predicted statistical moments is checked by traditional Monte Carlo simulation. Simulation results show the effectiveness of the NLSM in estimating uncertainty propagation in structural dynamics. In addition, it is demonstrated that the uncertainty in damping indeed influences the system response, with the effects being more pronounced for lightly damped structures, higher variability, and stronger statistical correlation of the damping parameters.
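
A minimal sketch of the Monte Carlo check mentioned in the abstract, assuming a simple two-mode response with correlated lognormal damping ratios; all numerical values are illustrative assumptions, not data from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Correlated lognormal damping ratios (mean ~2%, CoV ~20%, correlation 0.7)
mu, cov_ln, rho = np.log(0.02), 0.2, 0.7
cov = (cov_ln ** 2) * np.array([[1.0, rho], [rho, 1.0]])
zeta = np.exp(rng.multivariate_normal([mu, mu], cov, size=n))   # (n, 2)

# Dynamic amplification at resonance for each mode: |H| = 1 / (2 * zeta)
amp = (1.0 / (2.0 * zeta)).sum(axis=1)   # naive two-mode superposition

print(f"mean response  : {amp.mean():.2f}")
print(f"std. deviation : {amp.std(ddof=1):.2f}")
```
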
Go to article

Abstract

The aim of the paper is to point out that Monte Carlo simulation is an easy and flexible approach to forecasting the risk of an asset portfolio. The case study presented in the paper illustrates the problem of forecasting the risk arising from a portfolio of receivables denominated in different foreign currencies. Such a problem seems close to the real situation of enterprises offering products or services on several foreign markets. The changes in exchange rates are usually not normally distributed and, moreover, they are always interdependent. As shown in the paper, Monte Carlo simulation allows market risk to be forecast under such circumstances.
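
A hedged sketch of this kind of forecast, assuming a hypothetical two-currency portfolio with heavy-tailed, correlated exchange-rate changes modelled by a multivariate Student-t distribution; all figures are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n, nu = 100_000, 4                      # scenarios, t degrees of freedom

exposure = np.array([1.0e6, 0.5e6])     # receivables in EUR and USD
rate = np.array([4.30, 4.00])           # assumed PLN exchange rates
sigma = np.array([0.006, 0.008])        # daily volatility of rate changes
rho = 0.55                              # assumed correlation of changes

corr = np.array([[1.0, rho], [rho, 1.0]])
z = rng.standard_normal((n, 2)) @ np.linalg.cholesky(corr).T
chi = rng.chisquare(nu, size=(n, 1))
t = z * np.sqrt(nu / chi)               # correlated heavy-tailed draws

dv = (exposure * rate * sigma * t).sum(axis=1)  # portfolio change, PLN
print(f"1-day 95% VaR: {-np.quantile(dv, 0.05):,.0f} PLN")
```
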
Go to article

Abstract

According to the European Environment Agency (EEA 2018), air quality in Poland is among the worst in Europe. There are several sources of air pollution, but the condition of the air in Poland is primarily the result of so-called low-stack emissions from the household sector. The main reason for the emission of pollutants is the combustion of low-quality fuels (mainly low-quality coal) and waste, and the use of obsolete heating boilers with low efficiency and no appropriate filters. The aim of the study was to evaluate the impact of measures aimed at reducing low-stack emissions from the household sector (boiler replacement, change of fuel type, and thermal insulation of buildings), resulting from environmental regulations, on the improvement of energy efficiency and the emission of pollutants from the household sector in Poland. Stochastic energy and mass balance models for a hypothetical household were developed and used to assess the impact of remedial actions on energy efficiency and pollutant emissions. The annual energy consumption and emissions of pollutants were estimated for hypothetical households before and after the implementation of a given remedial action. The calculations, using Monte Carlo simulation, were carried out for several thousand hypothetical households, for which the values of the technical parameters (type of residential building, residential building area, unit energy demand for heating, type of heat source) were randomly drawn from probability distributions developed on the basis of an analysis of the domestic structure of households. The model takes into account the correlation coefficients between the explanatory variables. The obtained results were scaled up to 14.1 million hypothetical households, i.e. the real number of households in Poland. The results allowed the potential to be identified for reducing the emission of pollutants such as carbon dioxide, carbon monoxide, dust, and nitrogen oxides, and for improving energy efficiency, as a result of the proposed and implemented measures aimed at reducing low-stack emissions. The estimated emission reduction potential is 94% for CO, 49% for NOx, 90% for dust, and 87% for SO2. The potential for improving the energy efficiency of households is around 42%.
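
A toy sketch of the sampling step described above: correlated household parameters are drawn from assumed distributions and pushed through a trivial emission balance. The distributions and emission factors are invented for illustration, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000                                   # hypothetical households

# Correlated draws: building area [m2] and unit heat demand [kWh/(m2*yr)]
mean, sd, corr = [120.0, 180.0], [40.0, 60.0], 0.6
cov = np.array([[sd[0]**2, corr*sd[0]*sd[1]],
                [corr*sd[0]*sd[1], sd[1]**2]])
area, demand = rng.multivariate_normal(mean, cov, size=n).clip(min=20).T

energy = area * demand / 1000.0              # MWh of heat per household, yr

# Assumed CO emission factors [kg/MWh] for an old and a replacement boiler
ef_old, ef_new = 25.0, 2.0
print(f"CO before replacement: {(energy * ef_old).sum() / 1e3:,.0f} t/yr")
print(f"CO after replacement : {(energy * ef_new).sum() / 1e3:,.0f} t/yr")
print(f"reduction potential  : {1 - ef_new / ef_old:.0%}")
```
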
Go to article

Abstract

The paper is concerned with issues in the estimation of the distribution parameters of random variables by the Monte Carlo method. Such quantities can correspond to statistical parameters computed from data obtained in typical measurement situations. The subject of the research is the mean, the mean square and the variance of random variables with uniform, Gaussian, Student, Simpson, trapezoidal, exponential, gamma and arcsine distributions.
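
The type of experiment is easy to sketch; below is a minimal Monte Carlo estimate of the mean and variance for a few of the listed distributions, with arbitrary parameters and the arcsine distribution sampled by inverse CDF:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000

samples = {
    "uniform(0,1)":   rng.uniform(0.0, 1.0, n),       # mean 1/2, var 1/12
    "gaussian(0,1)":  rng.standard_normal(n),         # mean 0,   var 1
    "exponential(1)": rng.exponential(1.0, n),        # mean 1,   var 1
    # arcsine on [0,1] via inverse CDF: x = sin^2(pi*u/2); mean 1/2, var 1/8
    "arcsine":        np.sin(np.pi * rng.uniform(0, 1, n) / 2) ** 2,
}
for name, x in samples.items():
    print(f"{name:15s} mean={x.mean():+.4f}  var={x.var(ddof=1):.4f}")
```
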
Go to article

Abstract

Maximum score estimation is a class of semiparametric methods for estimating the coefficients of regression models. Estimates are obtained by maximizing a special function called the score. In the case of binary regression models, the score is the fraction of correctly classified observations. The aim of this article is to propose a modification of the score function. The modification yields smaller estimator variances than the standard maximum score method without affecting other properties such as consistency. The study consists of extensive Monte Carlo experiments.
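
For readers unfamiliar with the method, here is a minimal sketch of the standard maximum score idea (the modification proposed in the article is not reproduced); the crude random search over coefficient directions is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated binary-choice data: y = 1{x'beta + eps > 0}
n, beta_true = 2_000, np.array([1.0, -0.5])
X = rng.standard_normal((n, 2))
y = (X @ beta_true + rng.logistic(size=n) > 0).astype(int)

def score(b):
    """Fraction of correctly classified observations."""
    return np.mean(y == (X @ b > 0))

# Scale is not identified, so search directions on the unit sphere
cand = rng.standard_normal((20_000, 2))
cand /= np.linalg.norm(cand, axis=1, keepdims=True)
best = max(cand, key=score)

print("estimated direction:", np.round(best, 3))
print("true direction     :", np.round(beta_true / np.linalg.norm(beta_true), 3))
print("score              :", round(score(best), 4))
```
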
Go to article

Abstract

The aim of the article is to present the issue of risk and the related management methods, with particular emphasis on the conditions of investment in energy infrastructure. The work consists of two main parts: the first is a theoretical analysis of the issue, while the second discusses the application of the analysis methods on the example of an investment in an agricultural biogas plant. The article presents the definitions related to investment risk and its management, with particular emphasis on the distinction between risk and uncertainty. In addition, the main risk groups of the energy sector were analyzed. Then, the basic systematics and the division into particular risk groups were presented, and the impact of portfolio diversification on the general level of risk was determined. The sources of uncertainty were discussed with particular attention to the categories of energy investments. The next part of the article presents risk mitigation methods that are part of the integrated risk management process and describes the basic methods supporting the quantification of the risk level and its effects, including Monte Carlo (MC), Value at Risk (VaR), and other methods. Finally, the paper presents a possible application of the methods presented in the theoretical part. An investment in an agricultural biogas plant was the subject of the analysis, owing to its predictable operation combined with an extremely complicated and long-term investment process. An example of "large drawing analysis" was presented, followed by a Monte Carlo simulation and the determination of a VaR value. The presented study allows the risk of deviations of financial flows from the assumed values in particular periods to be determined and helps in assessing the effects of such deviations. The conducted analysis indicates a low investment risk and suggests that similar calculations can easily be performed for other investments.
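
A hedged sketch of the Monte Carlo/VaR step for an investment appraisal; the outlay, cash-flow distribution and discount rate are invented stand-ins, not the biogas-plant data from the article:

```python
import numpy as np

rng = np.random.default_rng(5)
n, years, r = 100_000, 15, 0.07
outlay = 12.0                                   # initial outlay, mln PLN

# Uncertain annual net cash flows, mln PLN (assumed normal for simplicity)
cf = rng.normal(loc=1.6, scale=0.4, size=(n, years))
disc = (1.0 + r) ** -np.arange(1, years + 1)    # discount factors
npv = cf @ disc - outlay                        # simulated NPV per scenario

print(f"mean NPV      : {npv.mean():6.2f} mln PLN")
print(f"P(NPV < 0)    : {np.mean(npv < 0):6.2%}")
print(f"NPV VaR (95%) : {-np.quantile(npv, 0.05):6.2f} mln PLN")
```
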
Go to article

Abstract

We propose an adaptation of the Nested Monte-Carlo Search algorithm for finding differential trails in the class of ARX ciphers. The practical application of the algorithm is demonstrated on round-reduced variants of block ciphers from the SPECK family. More specifically, we report the best differential trails, up to 9 rounds, for SPECK32.
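
A generic Nested Monte-Carlo Search skeleton (after Cazenave's formulation), demonstrated on a toy bit-sequence objective; the actual move and score definitions for ARX differential-trail search are far more involved and are not reproduced here:

```python
import random

MOVES = [0, 1]
LENGTH = 12

def legal_moves(seq):
    return MOVES if len(seq) < LENGTH else []

def score(seq):
    # Toy objective: reward alternating bits (stand-in for trail quality)
    return sum(a != b for a, b in zip(seq, seq[1:]))

def playout(seq):
    """Complete the sequence with uniformly random moves."""
    while legal_moves(seq):
        seq = seq + [random.choice(legal_moves(seq))]
    return score(seq), seq

def nmcs(seq, level):
    if level == 0:
        return playout(seq)
    best_score, best_seq = -1, None
    while legal_moves(seq):
        for m in legal_moves(seq):           # try every move at this step
            s, full = nmcs(seq + [m], level - 1)
            if s > best_score:
                best_score, best_seq = s, full
        seq = best_seq[:len(seq) + 1]        # follow the best line one move
    return best_score, best_seq

print(nmcs([], level=2))
```
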
Go to article

Abstract

The sustainable management of energy production and consumption is one of the main challenges of the 21st century. This results from threats to the natural environment, including the negative impact of the energy sector on the climate, the limited resources of fossil fuels, as well as the instability of renewable energy sources – despite the development of technologies for obtaining energy from the sun, wind, water, etc. In this situation, the efficiency of energy management, both on the micro (distributed energy) and macro (power system) scale, may be improved by innovative technological solutions enabling energy storage. Their effective implementation enables energy to be stored during periods of overproduction and used in the case of energy shortages. The importance of these challenges cannot be overestimated. Modern science needs to solve various technological issues in the field of storage, organizational problems of enterprises producing electricity and heat, and issues related to the functioning of energy markets. The article presents the specificity of the operation of a combined heat and power (CHP) plant with a heat accumulator in the electricity market, taking the parameters affected by uncertainty into account. It is pointed out that the analysis of the risk associated with energy prices and weather conditions is an important element of the decision-making process and of the management of a CHP plant equipped with a cold water heat accumulator. The complexity of the issues and the number of variables to be analyzed at a given time call for the use of advanced forecasting methods. Stochastic modeling methods are considered interesting tools for forecasting the operation of an installation with a heat accumulator while taking the influence of numerous variables into account. The analysis has shown that the combined use of Monte Carlo simulations and forecasting based on geometric Brownian motion enables the quantification of the risk of the CHP plant's operation and of the impact of using the energy store on resolving uncertainties. The applied methodology can be used at the design stage of systems with energy storage and enables risk analysis in already existing systems, allowing their efficiency to be improved. The introduction of additional parameters of the planned investments into the analysis will allow the maximum use of energy storage systems in both industrial and distributed power generation.
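
A minimal sketch of the price-uncertainty component: Monte Carlo paths of an energy price following geometric Brownian motion. The drift, volatility and initial price are assumed for illustration only:

```python
import numpy as np

rng = np.random.default_rng(6)
n_paths, n_days = 10_000, 365
s0, mu, sigma, dt = 250.0, 0.02, 0.25, 1.0 / 365.0   # PLN/MWh, annualized

# GBM: S_{t+dt} = S_t * exp((mu - sigma^2/2) dt + sigma sqrt(dt) Z)
z = rng.standard_normal((n_paths, n_days))
log_steps = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
prices = s0 * np.exp(np.cumsum(log_steps, axis=1))

final = prices[:, -1]
print(f"mean price after 1 yr : {final.mean():7.2f} PLN/MWh")
print(f"5%-95% range          : {np.quantile(final, 0.05):7.2f}"
      f" - {np.quantile(final, 0.95):7.2f}")
```
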
Go to article

Abstract

Basic gesture sensors can play a significant role as input units in mobile smart devices. However, they have to handle a wide variety of gestures while preserving the advantages of basic sensors. In this paper, a user-determined approach to the design of a sparse optical gesture sensor is proposed. The statistical research on a study group of individuals includes the measurement of user-related parameters such as the speed of a performed swipe (a dynamic gesture) and the morphology of fingers. The obtained results, as well as other a priori requirements for an optical gesture sensor, were then used in the design process. Several properties were examined by simulation or experimental verification. It was shown that the designed optical gesture sensor provides accurate localization of fingers and recognizes a set of static and dynamic hand gestures with a relatively low level of power consumption.
Go to article

Abstract

When an artificial neural network is used to determine the value of a physical quantity, its result is usually presented without an uncertainty. This is due to the difficulty of determining the uncertainties related to the neural model. However, the result of a measurement can be considered valid only with its respective measurement uncertainty. Therefore, this article proposes a method of obtaining reliable results from measuring systems that use artificial neural networks. To this end, it applies the Monte Carlo Method (MCM) to propagate uncertainty distributions during the training and use of the artificial neural networks.
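
A small sketch of the general idea under simplifying assumptions: input uncertainty is propagated by Monte Carlo through a tiny, already-trained network, yielding an output distribution and hence an uncertainty statement. The weights and the input distribution here are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(7)

# A toy "trained" network: 1 input -> 8 tanh units -> 1 output
W1, b1 = rng.standard_normal((8, 1)), rng.standard_normal(8)
W2, b2 = rng.standard_normal((1, 8)), 0.1

def net(x):
    h = np.tanh(W1 @ x.reshape(1, -1) + b1[:, None])
    return (W2 @ h + b2).ravel()

# MCM: draw the input from its uncertainty distribution, here N(0.5, 0.02)
x_samples = rng.normal(0.5, 0.02, size=100_000)
y_samples = net(x_samples)

print(f"y = {y_samples.mean():.4f} +/- {y_samples.std(ddof=1):.4f} (k=1)")
print(f"95% coverage interval: [{np.quantile(y_samples, 0.025):.4f}, "
      f"{np.quantile(y_samples, 0.975):.4f}]")
```
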
Go to article

Abstract

Improvements in modern manufacturing techniques imply more efficient production, but also new challenges for coordinate metrologists. The crucial task here is the assessment of coordinate measurement accuracy. It is important because, according to technological requirements, measurements are useful only when they are stated together with their accuracy. Currently used methods for estimating measurement accuracy are difficult to implement and time-consuming. It is therefore important to introduce correct and validated methods that are also easy to implement. The method presented in this paper is one of them. It is an on-line accuracy estimation method based on the virtual CMM idea. The model is built using a modern LaserTracer system and a common test sphere, and its implementation takes less than one day. Results obtained using the presented method are comparable to the results of commonly used uncertainty estimation methods, which proves its correct functioning. Its properties make it suitable for wide use in both laboratory and industrial conditions.
Go to article

Abstract

The paper presents the core design, model development and results of neutron transport simulations of a large Pressurized Water Reactor based on the AP1000 design. The SERPENT 2.1.29 Monte Carlo reactor physics computer code with the ENDF/B-VII and JEFF 3.1.1 nuclear data libraries was applied. The full-core 3D models were developed according to the available Design Control Documentation and the literature. Criticality simulations were performed for the core at the Beginning of Life state under Cold Shutdown, Hot Zero Power and Full Power conditions. Selected core parameters were investigated and compared with the design data: effective multiplication factors, boron concentrations, control rod worth, reactivity coefficients and radial power distributions. Acceptable agreement between the design data and the simulations was obtained, confirming the validity of the model and the applied methodology.
Go to article

Abstract

The paper presents a neutronic analysis of a battery-type 20 MWth high-temperature gas-cooled reactor. The developed reactor model is based on publicly available data for an 'early design' variant of the U-battery. The investigated core is a battery-type small modular reactor: a graphite-moderated, uranium-fueled, prismatic, helium-cooled high-temperature gas-cooled reactor with a graphite reflector. Two alternative core designs were investigated: the first has a central reflector and 30×4 prismatic fuel blocks, and the second has no central reflector and 37×4 blocks. The SERPENT Monte Carlo reactor physics computer code, with the ENDF and JEFF nuclear data libraries, was applied. Several nuclear design static criticality calculations were performed and compared with available reference results. The analysis covered single-assembly models and full-core simulations for two geometry models: homogeneous and heterogeneous (explicit). A sensitivity analysis of the reflector graphite density was performed. An acceptable agreement between the calculations and the reference design was obtained. All calculations were performed for the fresh core state.
Go to article

Abstract

The purpose of this study is to identify relationships between the values of fluidity obtained by computer simulation and by an experimental test in a horizontal three-channel mould designed in accordance with Measurement Systems Analysis. An Al-Si alloy was used as the model material. The factors affecting the fluidity varied in the following ranges: Si content 5 wt.% – 12 wt.%, Fe content 0.15 wt.% – 0.3 wt.%, pouring temperature 605°C – 830°C, and pouring speed 100 g·s⁻¹ – 400 g·s⁻¹. The NovaFlow&Solid software was used for the simulations. No statistically significant difference was found between the value of fluidity calculated by the regression equation and that obtained by experiment. This design simplifies the calculation of the capability of the fluidity measurement process, allowing the experiments to be fully replaced by calculation using the regression equation.
Go to article

Abstract

This paper addresses the issue of obtaining maximum likelihood estimates of the parameters of structural VAR models with a mixture of distributions. Since the problem does not have a closed-form solution, numerical optimization procedures need to be used. A Monte Carlo experiment is designed to compare the performance of four maximization algorithms and two estimation strategies. It is shown that the EM algorithm outperforms general maximization algorithms such as BFGS, NEWTON and BHHH. Moreover, the simplification of the problem introduced in the two-step quasi-ML method does not worsen the small sample properties of the estimators and may therefore be recommended in empirical analysis.
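
To illustrate why EM suits mixture models, here is a minimal EM for a two-component univariate Gaussian mixture, a far simpler cousin of the SVAR mixture model studied in the article; nothing here reproduces that model:

```python
import numpy as np

rng = np.random.default_rng(8)
x = np.concatenate([rng.normal(-2, 1, 400), rng.normal(3, 0.5, 600)])

# Initial guesses for mixing weight, means and standard deviations
pi, mu, sd = 0.5, np.array([-1.0, 1.0]), np.array([1.0, 1.0])
for _ in range(200):
    # E-step: posterior responsibility of each component
    pdf = np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
    w = pdf * np.array([1 - pi, pi])
    r = w[:, 1] / w.sum(axis=1)
    # M-step: closed-form updates (no general-purpose optimizer needed)
    pi = r.mean()
    mu = np.array([np.average(x, weights=1 - r), np.average(x, weights=r)])
    sd = np.sqrt(np.array([np.average((x - mu[0])**2, weights=1 - r),
                           np.average((x - mu[1])**2, weights=r)]))

print("pi =", round(pi, 3), " mu =", mu.round(3), " sd =", sd.round(3))
```
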
Go to article

Abstract

The paper deals with the solution of radiative heat transfer problems in enclosures filled with a nonparticipating medium, using ray tracing on hierarchical ortho-Cartesian meshes. The idea behind the approach is that radiative heat transfer problems can be solved on much coarser grids than their counterparts from computational fluid dynamics (CFD). The resulting code is designed as an add-on to OpenFOAM, an open-source CFD program. An ortho-Cartesian mesh involving boundary elements is created based upon the CFD mesh. Parametric non-uniform rational basis spline (NURBS) surfaces are used to define the boundaries of the enclosure, allowing domains of complex shapes to be dealt with. An algorithm for determining random, uniformly distributed locations of rays leaving NURBS surfaces is described. The paper presents results of test cases assuming gray diffusive walls. In the current version of the model the radiation is not absorbed within gases. However, the ultimate aim of the work is to extend the functionality of the model to problems in absorbing, emitting and scattering media, by iteratively projecting the results of the radiative analysis onto the CFD mesh and the CFD solution onto the radiative mesh.
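
One standard way to obtain uniformly distributed ray origins on a parametric surface is rejection sampling in the (u, v) domain, with acceptance proportional to the local area element |r_u × r_v|. The sketch below uses a simple analytic patch standing in for a NURBS surface and does not reproduce the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(9)

def surf(u, v):      # curved test patch: z = 0.5 sin(pi u) sin(pi v)
    return np.stack([u, v, 0.5 * np.sin(np.pi * u) * np.sin(np.pi * v)], -1)

def area_element(u, v, h=1e-5):
    ru = (surf(u + h, v) - surf(u - h, v)) / (2 * h)   # dr/du (finite diff.)
    rv = (surf(u, v + h) - surf(u, v - h)) / (2 * h)   # dr/dv
    return np.linalg.norm(np.cross(ru, rv), axis=-1)

# Conservative bound on |r_u x r_v| over the patch, from a coarse scan
g = np.linspace(0, 1, 50)
Amax = area_element(*np.meshgrid(g, g)).max() * 1.05

pts = []
while len(pts) < 1000:
    u, v = rng.uniform(size=2)
    if rng.uniform() * Amax <= area_element(u, v):     # accept w.p. A/Amax
        pts.append(surf(u, v))
print(np.array(pts).shape)   # (1000, 3): uniform points on the surface
```
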
Go to article

Abstract

Solar collectors are the main elements of solar heating systems. These devices convert the energy of solar radiation into heat of a working medium, which may be either a liquid (glycol or water) or a gas (air). In terms of construction, flat-plate, vacuum, vacuum-tube and concentrating collectors are distinguished. Flat-plate collectors are used primarily in buildings with low or medium heat demand, for example in households. Collector development has been directed at increasing output and improving the economic efficiency of the investment. The article evaluates the impact of changing the surface area of flat-plate solar collectors on the economic profitability of the investment. A single-family house located in the Małopolskie Voivodeship, in which the domestic hot water installation was extended with a solar system, was selected for the analysis. The system consists of flat-plate collectors with a total absorber area of 5.61 m2. Glycol is used as the working medium in the installation. In order to improve the economic effect, an increase in the absorber area was proposed. On the basis of three years of measurements of insolation and of the thermal performance of the installation, an economic model was created to assess the profitability of increasing the solar collector area. Calculations using the HDKR radiation model were performed in the Matlab environment for the Tarnów location (the one closest to the installation). In addition, based on actual measurements from this installation, which reflect the influence of many unmeasurable factors on the efficiency of solar energy conversion, simulations of the economic effect were performed for various levels of heat demand. The obtained results were generalized, which makes it possible to use them in the process of selecting the collector surface area for similar installations.
Go to article

Abstract

This paper addresses the influence of land topography and cover on 3D radiative effects under cloudless skies in the Hornsund area, Spitsbergen, Svalbard. The authors used Monte Carlo simulations of solar radiation transfer over a heterogeneous surface to study the impact of a non-uniform surface on: (1) the spatial distribution of irradiance transmittance at the fjord surface under cloudless skies; (2) the spectral shortwave aerosol radiative forcing at the fjord surface; (3) normalized nadir radiance at the Top Of the Atmosphere (TOA) over the fjord. The modelled transmittances and radiances over the fjord are compared to the transmittances and radiances over the open ocean under the same conditions. The dependence of the 3D radiative effects on aerosol optical thickness, aerosol type, surface albedo distribution, solar azimuth and zenith angle and spectral channel is discussed. The analysis was done for channels 3 (459-479 nm) and 2 (841-876 nm) of the MODIS radiometer. In the simulations a flat water surface was assumed. The study shows that snow-covered land surrounding the fjord strongly modifies the radiation environment over the fjord surface. The enhancement of the mean irradiance transmittance over the fjord with respect to the open ocean is up to 0.06 for channel 3. The enhancement exceeds 0.11 in the vicinity of sunlit cliffs. The influence of the snow-covered land on the TOA radiance over the fjord in channel 3 is comparable to the impact of an increase in aerosol optical thickness of over 100%, and in lateral fjords of up to several hundred percent. The increase in TOA radiance is wavelength dependent. These effects may affect retrievals of aerosol optical thickness.
Go to article

Abstract

In this work, a fast 32-bit one-million-channel time interval spectrometer is proposed based on field programmable gate arrays (FPGAs). The time resolution is adjustable down to 3.33 ns (= T, the digitization/discretization period) on the prototype system hardware. The system is capable of collecting billions of time interval data arranged in one million timing channels. This huge number of channels makes it an ideal measuring tool for very short to very long time intervals of nuclear particle detection systems. The data are stored and updated in a built-in SRAM memory during the measuring process, and then transferred to the computer. Two time-to-digital converters (TDCs) working in parallel are implemented in the design to immunize the system against the loss of the first short time interval events (namely below 10 ns, considering the tests performed on the prototype hardware platform of the system). Additionally, the theory of the multiple count loss effect is investigated analytically. Using the Monte Carlo method, losses of counts at rates of up to 100 million events per second (Meps) are calculated, and the effective system dead time is estimated by fitting a non-extendable dead time model to the results (τNE = 2.26 ns). An important dead time effect on a measured random process is the distortion of the time spectrum; this effect is also studied using the Monte Carlo method. The uncertainty of the system is analysed experimentally. The standard deviation of the system is estimated as ± 36.6 × T (T = 3.33 ns) for a one-second time interval test signal (300 million T in the time interval).
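
A short sketch of the count-loss simulation described above: Poisson events are filtered through a non-extendable (non-paralyzable) dead time and the measured rate is compared with the classical correction m = n/(1 + nτ). The rates are round illustrative values, with τ borrowed from the abstract:

```python
import numpy as np

rng = np.random.default_rng(10)
rate, tau, t_total = 50e6, 2.26e-9, 0.01     # 50 Meps, dead time, 10 ms

# Event arrival times from exponential inter-arrival gaps
t = np.cumsum(rng.exponential(1.0 / rate, int(rate * t_total * 1.2)))
t = t[t < t_total]

recorded, last = 0, -np.inf
for ti in t:                  # non-extendable: the dead window never extends
    if ti - last >= tau:
        recorded += 1
        last = ti

print(f"true rate     : {len(t) / t_total / 1e6:7.2f} Meps")
print(f"measured rate : {recorded / t_total / 1e6:7.2f} Meps")
print(f"theory        : {rate / (1 + rate * tau) / 1e6:7.2f} Meps")
```
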
Go to article

Abstract

The paper presents a multi-scale mathematical model dedicated to the comprehensive simulation of resistance heating combined with the melting and controlled cooling of steel samples. Experiments to verify the formulated numerical model were performed using a Gleeble 3800 thermo-mechanical simulator. The macro-scale model is based upon the solution of the Fourier-Kirchhoff equation for predicting the distribution of temperature fields within the volume of the sample. The macro-scale solution is complemented by a functional model generating voluminal heat sources resulting from the electric current flowing through the sample. The micro-scale model, concerning the grain growth simulation, is based upon a probabilistic Monte Carlo algorithm and on the minimization of the system energy. The model takes into account the forming mushy zone, where grains degrade at the melting stage – a unique feature of the micro-scale solution. The solution domains are coupled by interpolating the node temperatures of the finite element mesh (the macro model) onto the Monte Carlo cells (the micro model). The paper is complemented with examples of resistance heating results and macro- and micro-structural tests, along with test computations concerning the estimation of the extent of zones with diverse grain growth dynamics.
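
A compact sketch of a Potts-type Monte Carlo grain-growth step on a 2D cell grid, using the energy-minimization rule the abstract mentions; the article's model is much richer (e.g. the mushy-zone treatment), and none of that is reproduced here:

```python
import numpy as np

rng = np.random.default_rng(11)
N, Q, steps = 64, 32, 200_000
grid = rng.integers(Q, size=(N, N))          # random initial orientations

def unlike_neighbours(i, j, q):
    """Local energy: number of unlike nearest neighbours for state q."""
    nbrs = (grid[(i - 1) % N, j], grid[(i + 1) % N, j],
            grid[i, (j - 1) % N], grid[i, (j + 1) % N])
    return sum(q != nb for nb in nbrs)

def boundary_fraction(g):
    """Fraction of unlike nearest-neighbour pairs (boundary density)."""
    return 0.5 * ((g != np.roll(g, 1, 0)).mean() +
                  (g != np.roll(g, 1, 1)).mean())

print("boundary fraction before:", round(boundary_fraction(grid), 3))
for _ in range(steps):
    i, j = rng.integers(N, size=2)
    # Candidate: copy the orientation of a randomly chosen nearby cell
    new = grid[(i + rng.integers(-1, 2)) % N, (j + rng.integers(-1, 2)) % N]
    # Accept if the local energy does not increase
    if unlike_neighbours(i, j, new) <= unlike_neighbours(i, j, grid[i, j]):
        grid[i, j] = new
print("boundary fraction after :", round(boundary_fraction(grid), 3))
```
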
Go to article

Abstract

Matched sampling is a methodology used to estimate treatment effects. A caliper mechanism is used to achieve better similarity among matched pairs. We investigate finite sample properties of matching with a caliper and propose a slight modification to the existing mechanism. The simulation study compares the performance of both methods and shows that the standard caliper performs well only in the case of a constant treatment effect or a uniform propensity score distribution. In the case of a non-uniform distribution and non-uniform treatment, the dynamic caliper method outperforms standard caliper matching.
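
A minimal sketch of standard nearest-neighbour propensity-score matching with a fixed caliper (the baseline mechanism discussed above; the dynamic caliper modification is not reproduced), on simulated data with a known treatment effect:

```python
import numpy as np

rng = np.random.default_rng(12)

# Simulated propensity scores and outcomes for treated units and controls
ps_t = rng.beta(3, 2, 300)                   # treated
ps_c = rng.beta(2, 3, 900)                   # controls
y_t = 2.0 + 3.0 * ps_t + rng.standard_normal(300)   # true effect = 2.0
y_c = 3.0 * ps_c + rng.standard_normal(900)

caliper = 0.2 * np.concatenate([ps_t, ps_c]).std()  # simple rule of thumb
used = np.zeros(len(ps_c), dtype=bool)
effects = []
for p, y in zip(ps_t, y_t):
    d = np.abs(ps_c - p)
    d[used] = np.inf                         # matching without replacement
    j = d.argmin()
    if d[j] <= caliper:                      # match only within the caliper
        used[j] = True
        effects.append(y - y_c[j])

print(f"matched pairs : {len(effects)} / {len(ps_t)}")
print(f"ATT estimate  : {np.mean(effects):.3f}  (true effect = 2.0)")
```
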
Go to article

Abstract

The paper considers the modeling and estimation of the stochastic frontier model where the error components are assumed to be correlated and the inefficiency error is assumed to be autocorrelated. The multivariate Farlie-Gumbel-Morgenstern (FGM) and normal copulas are used to capture both the contemporaneous and the temporal dependence between, and among, the noise and the inefficiency components. The intractable multiple integrals that appear in the likelihood function of the model are evaluated using the Halton sequence based Monte Carlo (MC) simulation technique. The consistency and the asymptotic efficiency of the resulting simulated maximum likelihood (SML) estimators of the model parameters are established. Finally, the application of the model, estimated by the SML method, to real-life US airline data shows significant noise-inefficiency dependence and temporal dependence of the inefficiency.
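
The Halton sequence mentioned above is simple to sketch: the d-th coordinate of the i-th point is the radical inverse of i in the d-th prime base; such low-discrepancy draws can replace pseudo-random numbers when evaluating likelihood integrals by simulation:

```python
import numpy as np

def radical_inverse(i, base):
    """Reflect the base-b digits of i around the radix point."""
    x, f = 0.0, 1.0 / base
    while i > 0:
        x += (i % base) * f
        i //= base
        f /= base
    return x

def halton(n, dims, primes=(2, 3, 5, 7, 11, 13)):
    return np.array([[radical_inverse(i, primes[d]) for d in range(dims)]
                     for i in range(1, n + 1)])

pts = halton(1000, 2)
# Quick check: quasi-MC estimate of E[U1*U2] = 0.25 for independent uniforms
print(pts.prod(axis=1).mean())   # close to 0.25
```
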
Go to article

Abstract

The paper deals with the application of the Gumbel model to the evaluation of environmental loads. According to the recommendations of the Eurocodes, the conventional method of determining the return period and the characteristic values of loads utilizes the theory of extremes and implicitly assumes that the cumulative distribution function of the annual or other basic period extremes is the Gumbel distribution. However, the extreme value theory shows that the distribution of extremes asymptotically approaches the Gumbel distribution as the number of independent observations in each observation period, from which the maximum is abstracted, increases to infinity. Results of simulation-based calculations show that in practice the rate of convergence is very slow and depends significantly on the type of the parent distribution, the value of the coefficient of variation, and the number of observation periods. In this connection, a straightforward, purely empirical method based on fitting a curve to the observed extremes is suggested.
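
A short sketch of the type of simulation study described above: block maxima of an assumed lognormal parent are fitted with a Gumbel model and a goodness-of-fit statistic is reported. Sample sizes and parameters are arbitrary illustrative choices:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(13)
n_per_period, n_periods = 365, 50        # observations per period, periods

parent = rng.lognormal(mean=0.0, sigma=0.5, size=(n_periods, n_per_period))
maxima = parent.max(axis=1)              # one extreme per basic period

loc, scale = stats.gumbel_r.fit(maxima)  # ML fit of the Gumbel model
ks = stats.kstest(maxima, "gumbel_r", args=(loc, scale))
print(f"Gumbel fit: loc={loc:.3f}, scale={scale:.3f}")
print(f"KS statistic={ks.statistic:.3f}, p-value={ks.pvalue:.3f}")
```
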
Go to article
