Search results

Number of results: 16

Abstract

Describing gas boiler fuel consumption as a time series makes it possible to use tools appropriate for processing such data to analyze this phenomenon. One of them is the class of ARIMA models. The article proposes this type of model for predicting monthly gas consumption in a boiler room serving heating and hot water preparation. The boiler supplies heat to a group of residential buildings. Based on the collected data, three specific models were selected and their forecast accuracy was assessed. Calculations and analyses were carried out in the R environment using the “forecast” and “ggplot2” packages. The good quality of the obtained forecasts has been demonstrated, confirming the usefulness of the proposed analytical tools. The summary also indicates the purposes for which forecasts obtained in this way can be used. They can be useful for diagnosing the correct operation of a heat source: recording fuel consumption at a level significantly deviating from the forecast should be a signal to immediately inspect the boiler room and the heat supply system and to explain the reason for the difference. In this way, irregularities in the operation of the heat supply system can be detected before they are found by traditional methods. The gas consumption forecast is also useful for optimizing the financial management of the property manager responsible for the operation of the boiler room; on this basis, operating fees or financial operations using periodic capital surpluses can be planned.
Go to article
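The abstract above fits ARIMA models in R with the “forecast” package. As a rough, language-neutral illustration of the simplest member of that family, the sketch below fits an AR(1) model by ordinary least squares and iterates the fitted recursion to produce point forecasts. The function names and the toy consumption figures are hypothetical, not taken from the paper.

```python
# Minimal AR(1) fit/forecast sketch (hypothetical data; the paper itself
# uses full ARIMA models via R's "forecast" package).

def fit_ar1(series):
    """Least-squares estimate of x_t = c + phi * x_{t-1} + e_t."""
    x = series[:-1]
    y = series[1:]
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    phi = sxy / sxx
    c = my - phi * mx
    return c, phi

def forecast_ar1(last_value, c, phi, steps):
    """Iterate the fitted recursion to produce point forecasts."""
    out = []
    x = last_value
    for _ in range(steps):
        x = c + phi * x
        out.append(x)
    return out

# Toy monthly gas-consumption-like series (illustrative numbers only).
history = [100.0, 92.0, 85.0, 80.0, 76.5, 73.8]
c, phi = fit_ar1(history)
preds = forecast_ar1(history[-1], c, phi, 3)
```

A real application would, as in the paper, also difference the series and add moving-average terms; the one-parameter AR(1) case only shows the fit-then-iterate mechanics.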

Abstract

Volatility persistence is a stylized statistical property of financial time-series data such as exchange rates and stock returns. The purpose of this letter is to investigate the relationship between volatility persistence and predictability of squared returns.
Go to article

Abstract

This paper points out that the ARMA models followed by the squares of GARCH processes are themselves volatile, and it gives explicit and general forms of their dependent, volatile innovations. The volatility function of the ARMA innovations is shown to be the square of the corresponding GARCH volatility function. The prediction of GARCH squares is facilitated by the ARMA structure, and predictive intervals are considered. Further, the developments suggest families of volatile ARMA processes.
Go to article
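The ARMA representation mentioned in the abstract can be checked numerically in the simplest case. For GARCH(1,1), with innovation ν_t = ε_t² − σ_t², the squares satisfy the exact ARMA(1,1) recursion ε_t² = ω + (α+β)ε²_{t−1} + ν_t − βν_{t−1}. The sketch below (my own illustration, with arbitrary parameter values, not the paper's general result) simulates a GARCH(1,1) path and verifies the recursion to machine precision.

```python
import random

# Numerical check that GARCH(1,1) squares satisfy an ARMA(1,1) recursion:
#   e_t^2 = omega + (alpha+beta) * e_{t-1}^2 + v_t - beta * v_{t-1},
# where v_t = e_t^2 - sigma_t^2 is the martingale-difference innovation.

def simulate_garch11(omega, alpha, beta, n, seed=0):
    rng = random.Random(seed)
    sig2 = omega / (1.0 - alpha - beta)   # start at the unconditional variance
    e2, s2 = [], []
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        e = (sig2 ** 0.5) * z
        e2.append(e * e)
        s2.append(sig2)
        sig2 = omega + alpha * e * e + beta * sig2
    return e2, s2

omega, alpha, beta = 0.1, 0.1, 0.8     # arbitrary stationary parameters
e2, s2 = simulate_garch11(omega, alpha, beta, 500)
v = [a - b for a, b in zip(e2, s2)]    # the ARMA innovations

# The recursion holds exactly (up to rounding) at every t >= 1.
residuals = [
    e2[t] - (omega + (alpha + beta) * e2[t - 1] + v[t] - beta * v[t - 1])
    for t in range(1, len(e2))
]
max_residual = max(abs(r) for r in residuals)
```

The identity is algebraic, so no estimation is involved: substituting σ²_t = ω + αε²_{t−1} + βσ²_{t−1} into ε_t² = σ_t² + ν_t gives the recursion directly.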

Abstract

Monitoring of the permanent stations that make up the reference frame is an integral part of a geodesist's work. The selection of reference stations is based on an analysis of the parameters characterizing them (hardware, stability of coordinates, mounting, location). In this paper, we took the phase residual into account as an indicator of unmodelled signal. Phase residuals were computed from ASG-EUPOS and EPN observation processing. The results show a connection between the method of mounting the antenna and the residuals. We reviewed the multipath effect at ASG-EUPOS stations and selected those characterized by the highest phase residuals. The results show that the LC phase residual is a good factor for characterizing the reliability of a site's solutions. For the majority of sites, RMS values were less than 10 mm. Modulations associated with the multipath effect were observed for only a few ASG-EUPOS sites. Phase residuals are distributed in a specific way for sites whose antennas are mounted on pillars (more common for EPN sites). For the majority of the analysed sites, the phase residual distribution was similar on different days and did not depend directly on atmospheric conditions.
Go to article

Abstract

The paper presents a summary of research activities concerning theoretical geodesy performed in Poland in the period 2011–2014. It contains the results of research on new methods of parameter estimation, a study on the robustness properties of M-estimation, control network and deformation analysis, and geodetic time series analysis. The main achievements in geodetic parameter estimation involve a new model of M-estimation with probabilistic models of geodetic observations, a new Shift-Msplit estimation, which allows estimating a vector of parameter differences, and the Shift-Msplit(+) estimation, a generalisation of Shift-Msplit estimation for the case where the design matrix A of the functional model does not have full column rank. New algorithms for converting coordinates between Cartesian and geodetic coordinates, on both the rotational and the triaxial ellipsoid, can be mentioned as highlights of the research of the last four years. The new parameter estimation models developed have been adopted and successfully applied to control network and deformation analysis. New algorithms based on the wavelet, Fourier and Hilbert transforms were applied to find time-frequency characteristics of geodetic and geophysical time series, as well as time-frequency relations between them. Statistical properties of these time series are also presented using different statistical tests, as well as the 2nd, 3rd and 4th moments about the mean. New forecasting methods are presented which enable prediction of the considered time series in different frequency bands.
Go to article

Abstract

When observations are autocorrelated, the standard formulae for the estimators of the variance, s², and of the variance of the mean, s²(x̄), are no longer adequate. They should be replaced by suitably defined estimators, s²_a and s²_a(x̄), which are unbiased given that the autocorrelation function is known. The formula for s²_a was given by Bayley and Hammersley in 1946; this work provides a simple derivation of it. The quantity named the effective number of observations, n_eff, is thoroughly discussed. It replaces the actual number of observations n when describing the relationship between the variance and the variance of the mean, and can be used to express s²_a and s²_a(x̄) in a simple manner. The dispersion of both estimators depends on another effective number, called the effective degrees of freedom ν_eff. Most of the formulae discussed in this paper are scattered throughout the literature and not very well known; this work aims to promote their more widespread use. The presented algorithms represent a natural extension of the GUM formulation of type-A uncertainty to the case of autocorrelated observations.
Go to article
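The effective number of observations mentioned in the abstract has a standard closed form when the autocorrelation function ρ_k is known: n_eff = n / (1 + 2 Σ_{k=1}^{n−1} (1 − k/n) ρ_k), so the variance of the mean becomes s²/n_eff instead of s²/n. A minimal sketch with illustrative numbers (not the paper's data):

```python
# Effective number of observations for autocorrelated data, assuming the
# autocorrelation function rho_k is known:
#   n_eff = n / (1 + 2 * sum_{k=1}^{n-1} (1 - k/n) * rho_k)

def n_effective(n, rho):
    """rho[k-1] holds rho_k for k = 1..n-1 (treated as zero beyond len(rho))."""
    s = 0.0
    for k in range(1, n):
        r = rho[k - 1] if k - 1 < len(rho) else 0.0
        s += (1.0 - k / n) * r
    return n / (1.0 + 2.0 * s)

# White noise: no autocorrelation, so n_eff equals n.
print(n_effective(100, []))            # -> 100.0
# Positive AR(1)-like correlation shrinks the effective sample size.
rho_ar1 = [0.7 ** k for k in range(1, 100)]
print(n_effective(100, rho_ar1))
```

The shrinkage of n_eff below n is exactly why the naive s²/n understates the uncertainty of the mean for positively correlated data.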

Abstract

This paper proposes a soft sensing method based on the least squares support vector machine (LS-SVM), using temperature time series for gas flow measurements. A heater unit has been installed on the external wall of a pipeline to generate heat pulses. Dynamic temperature signals have been collected upstream of the heater unit. The temperature time series are the main secondary variables of the soft sensing technique for estimating the flow rate. An LS-SVM model is proposed to construct a non-linear relation between the flow rate and the temperature time series. To select its inputs, the parameters of the measurement system are divided into three categories: blind, invalid and secondary variables. The kernel function parameters are then optimized to improve the estimation accuracy. Experiments have been conducted in both the single-pulse and the multiple-pulse heating modes. The results show that the estimates are acceptable.
Go to article
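For readers unfamiliar with LS-SVM regression, the sketch below implements the standard dual formulation: solve the bordered linear system [0, 1ᵀ; 1, K + I/γ][b; α] = [0; y], then predict with f(x) = Σ α_i K(x, x_i) + b. The RBF kernel width, regularization constant γ, and one-dimensional toy data are hypothetical and unrelated to the paper's temperature/flow measurements.

```python
import math

# Toy LS-SVM regression sketch (standard dual formulation; hypothetical data).

def rbf(u, v, sigma=1.0):
    return math.exp(-((u - v) ** 2) / (2.0 * sigma ** 2))

def solve_linear(A, b):
    """Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def lssvm_fit(xs, ys, gamma=1e6, sigma=1.0):
    """Solve [0, 1^T; 1, K + I/gamma][b; alpha] = [0; y], return predictor."""
    n = len(xs)
    A = [[0.0] * (n + 1) for _ in range(n + 1)]
    rhs = [0.0] + list(ys)
    for i in range(n):
        A[0][i + 1] = 1.0
        A[i + 1][0] = 1.0
        for j in range(n):
            A[i + 1][j + 1] = rbf(xs[i], xs[j], sigma) + (1.0 / gamma if i == j else 0.0)
    sol = solve_linear(A, rhs)
    b, alpha = sol[0], sol[1:]
    return lambda x: sum(a * rbf(x, xi, sigma) for a, xi in zip(alpha, xs)) + b

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.0, 0.8, 0.9, 0.1, -0.8]   # arbitrary toy targets
f = lssvm_fit(xs, ys)
```

With a large γ the model nearly interpolates the training targets; a smaller γ trades fit for smoothness, which is the knob the kernel-parameter optimization in the paper would tune alongside σ.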

Abstract

A summary of research activities concerning general theory and methodology performed in Poland in the period 2015–2018 is presented as a national report for the 27th IUGG (International Union of Geodesy and Geophysics) General Assembly. It contains the results of research on new or improved methods and variants of robust parameter estimation and their application, especially to control network analysis. Reliability analysis of the observation system and an integrated adjustment approach are also given. The identifiability (ID) index, a new measure for the minimal detectable bias (MDB) in the observation system of a network, has been introduced. A new method of covariance function parameter estimation in least squares collocation has been developed. The robustified version of Shift-Msplit estimation, termed Shift-M*split estimation, which enables robust estimation of parameter differences without prior estimation of the parameters themselves, has been introduced. Results on the analysis of geodetic time series, particularly Earth orientation parameter time series, geocenter time series, permanent station coordinates and sea level variation time series, are also provided in this review paper. The entire bibliography of related works is provided in the references.
Go to article

Abstract

The article presents the influence of the input data preparation method on the results of predicting corrections for the Polish timescale UTC(PL) with a GMDH (Group Method of Data Handling) neural network. Prediction of the corrections was carried out using two methods: time series analysis and regression. As appropriate to these methods, the input data were prepared on the basis of two time series, ts1 and ts2. The research concerned determining the prediction errors on particular days of the forecast and the influence of the amount of data on the prediction error. The obtained results indicate that, in the case of the GMDH neural network, the best quality of forecasting for UTC(PL) can be obtained using the time series analysis method. The prediction errors obtained did not exceed ±8 ns, which confirms the possibility of maintaining the Polish timescale at a high level of compliance with UTC.
Go to article

Abstract

The paper presents a local dynamic approach to the integration of an ensemble of predictors. Classical fusion of many predictors takes into account all units, forming a weighted average of the results of all units in the ensemble. This paper proposes a different approach: the prediction of the time series for the next day is made by only one member of the ensemble, the one which was best in the learning stage for the stored input vector closest to the input data actually applied. Thanks to this arrangement we avoid the situation in which the worst unit reduces the accuracy of the whole ensemble. In this way we obtain an increased level of statistical forecasting accuracy, since each task is performed by the best-suited predictor. Moreover, such an arrangement of the integration allows for using units of very different quality without decreasing the quality of the final prediction. Numerical experiments, performed for forecasting the next-day average PM10 pollution and the 24-element vector of hourly loads of the power system, have confirmed the superiority of the presented approach. All quality measures of the forecast have been significantly improved.
Go to article
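The selection rule described above can be sketched in a few lines: store, for each learning-stage input, the per-predictor error, and at prediction time delegate to the predictor that was best on the stored input nearest to the new one. The predictors, inputs and error values below are hypothetical placeholders, not the paper's PM10 or load models.

```python
# Minimal sketch of local dynamic predictor selection: instead of averaging
# an ensemble, pick the single predictor that was best in the learning stage
# for the stored input closest to the new input.

def euclidean(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

def select_and_predict(x_new, train_inputs, per_case_errors, predictors):
    """per_case_errors[i][m] = learning-stage error of predictor m on case i."""
    i = min(range(len(train_inputs)),
            key=lambda j: euclidean(x_new, train_inputs[j]))
    m = min(range(len(predictors)), key=lambda k: per_case_errors[i][k])
    return predictors[m](x_new)

# Two toy predictors: one doubles the first feature, one negates it.
predictors = [lambda x: 2 * x[0], lambda x: -x[0]]
train_inputs = [(0.0,), (10.0,)]
per_case_errors = [[0.1, 5.0],   # predictor 0 was best near x = 0
                   [4.0, 0.2]]   # predictor 1 was best near x = 10
print(select_and_predict((1.0,), train_inputs, per_case_errors, predictors))   # -> 2.0
print(select_and_predict((9.0,), train_inputs, per_case_errors, predictors))   # -> -9.0
```

The point of the design is visible even in this toy: the weak predictor in each region never contaminates the output, which a weighted average cannot guarantee.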

Abstract

Following the results presented in [21], we present an efficient approach to the Schur parametrization/modeling of a subclass of second-order time series, which we term p-stationary time series, yielding a uniform hierarchy of algorithms suitable for efficient implementations and a good starting point for nonlinear generalizations to higher-order non-Gaussian near-stationary time series.
Go to article

Abstract

Prior knowledge of the autocorrelation function (ACF) enables the application of an analytical formalism for the unbiased estimators of the variance, s²_a, and of the variance of the mean, s²_a(x̄). Both can be expressed with the use of the so-called effective number of observations, n_eff. We show how to adapt this formalism when only an estimate {r_k} of the ACF derived from a sample is available. A novel method is introduced, based on truncation of the {r_k} function at the point of its first transit through zero (FTZ). It can be applied to non-negative ACFs with a correlation range smaller than the sample size. Contrary to the other methods described in the literature, the FTZ method assures the finite range 1 < n_eff ≤ n for any data. The effect of replacing the standard estimator of the ACF with three alternative estimators is also investigated. Monte Carlo simulations concerning the bias and dispersion of the resulting estimators s_a and s_a(x̄) suggest that the presented formalism can be effectively used to determine a measurement uncertainty. The described method is illustrated with an exemplary analysis of autocorrelated variations of the intensity of an X-ray beam diffracted from a powder sample, known as the particle statistics effect.
Go to article
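The FTZ rule described in the abstract is easy to sketch: truncate the estimated ACF {r_k} at its first non-positive value, then plug the truncated sequence into the effective-number-of-observations formula n_eff = n / (1 + 2 Σ_k (1 − k/n) r_k). The ACF estimate below is an invented example, not data from the paper.

```python
# First-transit-through-zero (FTZ) truncation of an estimated ACF, followed
# by the effective-number-of-observations computation.

def ftz_truncate(r):
    """Keep r_1, r_2, ... up to (not including) the first r_k <= 0."""
    out = []
    for rk in r:
        if rk <= 0.0:
            break
        out.append(rk)
    return out

def n_effective(n, r):
    s = sum((1.0 - k / n) * rk for k, rk in enumerate(r, start=1))
    return n / (1.0 + 2.0 * s)

# Invented ACF estimate with noise-induced sign flips at larger lags.
r_hat = [0.6, 0.3, 0.1, -0.05, 0.04, -0.02]
r_trunc = ftz_truncate(r_hat)      # -> [0.6, 0.3, 0.1]
n_eff = n_effective(100, r_trunc)
```

Because truncation keeps only an initial run of positive r_k, the denominator stays above 1 and below 2n − 1, which is how the method guarantees 1 < n_eff ≤ n for any data.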

Abstract

The aim of the article is to construct an asymptotically consistent test, based on a subsampling approach, to verify the hypothesis of the existence of an individual or common deterministic cycle in the coordinates of a multivariate macroeconomic time series. By the deterministic cycle we mean periodic or almost periodic fluctuations in the mean function. To construct the test we formulate a multivariate non-parametric model containing the business cycle component in the unconditional mean function. The construction relies on the Fourier representation of the unconditional expectation of a multivariate Almost Periodically Correlated time series and is related to the fixed deterministic cycle presented in the literature. An analysis of the existence of common deterministic business cycles for selected European countries is presented, based on monthly industrial production indexes. Our main finding from the empirical part is that the deterministic cycle can be strongly supported by the data and therefore should not be automatically neglected in an analysis without justification.
Go to article
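The Fourier-representation idea behind the test can be illustrated simply: a deterministic cycle at frequency λ shows up as a nonzero Fourier coefficient a(λ) = (1/n) Σ_t x_t e^(−2πiλt) of the mean function. The toy series below is a pure cosine, not the production-index data, and the function name is mine.

```python
import cmath
import math

# A deterministic cycle at frequency lam appears as a nonzero Fourier
# coefficient a(lam) = (1/n) * sum_t x[t] * exp(-2*pi*i*lam*t).

def fourier_coeff(x, lam):
    n = len(x)
    return sum(x[t] * cmath.exp(-2j * math.pi * lam * t) for t in range(n)) / n

# A pure monthly cycle of amplitude 2 over 10 "years" of monthly data.
x = [2.0 * math.cos(2 * math.pi * t / 12) for t in range(120)]
amp_annual = 2.0 * abs(fourier_coeff(x, 1.0 / 12.0))   # recovers the amplitude 2
amp_other = 2.0 * abs(fourier_coeff(x, 1.0 / 7.0))     # small: no cycle there
```

The paper's contribution is the inferential part, i.e. a subsampling test of whether such coefficients are significantly nonzero in noisy multivariate data; the computation above only shows where the cycle lives in the Fourier representation.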

Abstract

We study the autocovariance structure of a general Markov-switching second-order stationary VARMA model. We then give stable finite-order VARMA(p*, q*) representations for those M-state Markov-switching VARMA(p, q) processes in which the observables are uncorrelated with the regime variables. This allows us to obtain sharper bounds for p* and q* than those existing in the literature. Our results provide new insights into the stochastic properties of these processes and facilitate statistical inference about the orders of MS-VARMA models and the underlying number of hidden states.
Go to article

Abstract

The aim of the paper was to apply time series analysis to the control of the melting process of grey cast iron in production conditions. The production data were collected in one of the Polish foundries in the form of spectrometer printouts. The quality of the alloy was controlled through its chemical composition at intervals of about 0.5 hour. The procedure for preparing the industrial data is presented, including an OCR-based method of transformation to an electronic numerical format as well as the generation of records related to particular weekdays. The computations for the time series analysis were made using the author's own software with a wide range of capabilities, including the detection of important periodicities in the data as well as regression modelling of the residual data, i.e. the values obtained after subtraction of the general trend, the trend of the variability amplitude and the periodic component. The most interesting results of the analysis include: a significant 2-measurement periodicity in the percentages of all components, a significant 7-day periodicity in the silicon content measured at the end of a day, and the relatively good prediction accuracy obtained without modelling of the residual data for various types of expected values. Some practical conclusions have been formulated concerning possible improvements in the melting process control procedures, as well as more general tips concerning applications of time series analysis in foundry production.
Go to article
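A periodicity like the 2-measurement pattern reported above can be detected with nothing more than the sample autocorrelation function: a k-lag periodicity shows up as a local maximum of the ACF at lag k. The sketch below uses invented alternating measurements, not the foundry records, and a deliberately naive "largest ACF value" detector.

```python
# Detecting a periodicity in measurement data via the sample ACF
# (toy data; a k-lag periodicity gives an ACF peak at lag k).

def acf(x, max_lag):
    n = len(x)
    m = sum(x) / n
    var = sum((v - m) ** 2 for v in x)
    out = []
    for k in range(1, max_lag + 1):
        cov = sum((x[t] - m) * (x[t + k] - m) for t in range(n - k))
        out.append(cov / var)
    return out

def dominant_lag(x, max_lag):
    r = acf(x, max_lag)
    return 1 + max(range(len(r)), key=lambda i: r[i])

# Invented "every 2nd measurement" pattern in a chemical-composition-like series.
series = [3.3 if t % 2 == 0 else 3.5 for t in range(60)]
print(dominant_lag(series, 10))   # -> 2
```

A production tool, like the author's software, would additionally test the significance of the peak and remove trends first; this sketch only shows why the 2-measurement pattern is visible in the ACF at all.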
