The essay presents an original application of the coolhunting method to discovering new trends in architecture and design. The ability to identify trends is tied to the possibility of gaining an advantage over the competition through new designs that can become hits on the market, winning the favor of customers. The term coolhunting can be broadly defined as the pursuit of inspiration and the forecasting of directions of development. Initially applied to fashion, the term quickly spread to other spheres of activity, such as music, the arts, lifestyle and, finally, architecture and design. The essay is a slightly altered and improved rendition of the author's article published in Zastosowania ergonomii. Wybrane kierunki badań ergonomicznych w roku 2014 (ed. Charytonowicz J.), Publ. Polskie Towarzystwo Ergonomiczne PTErg, o/Wrocław, 2014, pp. 289–304. The method outlined therein is the result of research conducted under the author's supervision at the Institute of Architecture and Spatial Planning of the Poznań University of Technology between 2012 and 2014.
The aim of the paper is to show that Monte Carlo simulation is an easy and flexible approach to forecasting the risk of an asset portfolio. The case study presented in the paper illustrates the problem of forecasting the risk arising from a portfolio of receivables denominated in different foreign currencies. This problem is close to the real situation of enterprises offering products or services on several foreign markets. Changes in exchange rates are usually not normally distributed and, moreover, they are interdependent. As shown in the paper, Monte Carlo simulation allows market risk to be forecasted under such circumstances.
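A minimal sketch of the kind of simulation the abstract describes: Value-at-Risk for a two-currency portfolio of receivables, with fat-tailed, correlated rate changes generated as bivariate Student-t shocks. All figures (positions, PLN rates, volatilities, correlation, degrees of freedom) are invented placeholders, not the paper's data, and the paper's distributional choices may differ.

```python
import math
import random

random.seed(42)

def corr_t_shocks(rho, df):
    # correlated standard normals via a 2x2 Cholesky factor
    z1 = random.gauss(0, 1)
    z2 = rho * z1 + math.sqrt(1 - rho ** 2) * random.gauss(0, 1)
    # a common chi-square mixing variable gives dependent, fat-tailed t shocks
    w = math.sqrt(df / random.gammavariate(df / 2, 2))
    return z1 * w, z2 * w

def simulate_var(n_sims=50_000, alpha=0.05):
    eur_pos, usd_pos = 1_000_000, 500_000   # hypothetical receivables in EUR and USD
    eur_rate, usd_rate = 4.30, 4.00         # hypothetical PLN spot rates
    vol_eur, vol_usd, rho, df = 0.02, 0.025, 0.6, 5
    pnl = []
    for _ in range(n_sims):
        s1, s2 = corr_t_shocks(rho, df)
        pnl.append(eur_pos * eur_rate * vol_eur * s1
                   + usd_pos * usd_rate * vol_usd * s2)
    pnl.sort()
    return -pnl[int(alpha * n_sims)]   # Value-at-Risk at the (1 - alpha) level, in PLN

var_95 = simulate_var()
```

Because the risk measure is read off the empirical distribution of simulated outcomes, the same code works for any dependence structure or marginal distribution one cares to plug in.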
The irregularity profiles of steel samples after vapour blasting were measured, and a correlation analysis of the profile parameters was carried out. As a result, the following parameters were selected: Pq, Pt, PDq, Pp/Pt and Pku. Surface profiles after vapour blasting were then modeled; the modeled surfaces were correctly matched to the measured surfaces in 78% of all analyzed cases. The vapour blasting experiment was subsequently carried out using an orthogonal selective research plan. The distance d between the nozzle and the sample and the feed system pressure p were the input parameters; selected surface texture coefficients were the output parameters. As a result of the experiment, regression equations connecting the vapour blasting process parameters p and d with the selected profile parameters were obtained. Finally, 2D profiles of steel samples were forecasted for various values of the vapour blasting parameters. Proper matching of modeled to measured profiles was achieved in 75% of the analyzed cases.
The article is an attempt to evaluate the accuracy of Marx's predictions and to present some reasons for Marx's ineffectiveness as a forecaster. The article discusses contemporary research on forecasting, applies its results to Marx, and analyses the dialectical aspect of his laws in order to explain Marx's weaknesses as a forecaster. The author of Capital turns out to be – in P.E. Tetlock's typology – a 'hedgehog', i.e. a bad forecaster who uses questionable methods to defend his predictions at all costs.
This never before published paper is one of the last writings of Juliusz Żórawski (1898–1967), professor, architect and theoretician of architecture. The notion of limited complexity introduced here relates to individual characteristics of the conceptual abilities of man. The tasks of architecture are based on prognoses, and this brings with it the risk of making errors. The author criticises J. Fourastié's prognoses concerning the Earth's overpopulation in 3000 AD, which would force the building of new cities above the ground, contrary to human psychosomatic nature and habits.
Potato white mold, caused by Sclerotinia sclerotiorum, is an important plant disease occurring in many potato-producing areas throughout the world. In this study, a specific diagnostic method was used to detect and quantify S. sclerotiorum ascospores, and its forecasting ability was assessed in potato fields during the flowering periods of 2011 to 2014 in Bahar County, Hamedan Province. Using the GenEMBL database, a primer pair, HZSCREV and HZSCFOR, was designed and optimized for the pathogen. After testing the sensitivity of the primers, DNA was extracted from samples of outdoor Burkard traps placed in potato fields. A linear association was observed between pathogen DNA and the number of ascospores using the quantitative PCR (qPCR) technique in the presence of SYBR dye. The qPCR could successfully detect DNA amounts representing two S. sclerotiorum ascospores and was not sensitive to a variety of tested fungi such as Botrytis cinerea, Alternaria brassicae and Fusarium solani. In contrast to the amount of rainfall, a direct relationship was found between ascospore numbers and the incidence of potato white mold from 2011 to 2014.
In this paper we show that in the lognormal discrete-time stochastic volatility model with predictable conditional expected returns, the conditional expected value of the discounted payoff of a European call option is infinite. Our empirical illustration shows that the characteristics of the predictive distributions of the discounted payoffs, obtained using Monte Carlo methods, do not indicate directly that the expected discounted payoffs are infinite.
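The empirical point in the abstract – that simulated sample means of the discounted payoff look unremarkable even when the true expectation is infinite – can be illustrated with a toy experiment. The parameter values below are invented, and this sketch does not include the predictable conditional expected returns on which the theoretical result relies; it only shows the kind of Monte Carlo summary the authors examine.

```python
import math
import random

def sv_call_payoff(seed, n_periods=20, s0=100.0, strike=100.0, r=0.0002):
    # discrete-time lognormal SV: the log-variance follows a Gaussian AR(1)
    rng = random.Random(seed)
    mu_h, phi, sigma_eta = math.log(1e-4), 0.98, 0.3   # assumed parameter values
    log_h, log_s = mu_h, math.log(s0)
    for _ in range(n_periods):
        log_h = mu_h + phi * (log_h - mu_h) + sigma_eta * rng.gauss(0, 1)
        h = math.exp(log_h)
        log_s += r - 0.5 * h + math.sqrt(h) * rng.gauss(0, 1)
    # discounted payoff of a European call at maturity
    return math.exp(-r * n_periods) * max(math.exp(log_s) - strike, 0.0)

# independent sample means of the discounted payoff: each looks finite and
# well-behaved, which is exactly why the divergence is hard to spot empirically
estimates = [sum(sv_call_payoff(100_000 * k + i) for i in range(5_000)) / 5_000
             for k in range(3)]
```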
In order to prepare a coal company for future developments, it is important to predict how the key environmental factors can evolve. This article presents the most important factors influencing hard coal demand in Poland. They have been used as explanatory variables in the creation of a mathematical model of coal sales. In order to build the coal sales forecast, the authors used the ARMAX model. Its validation was performed using accuracy measures such as RMSE, MAPE and Theil's index. The conducted studies allowed the statistically significant factors among all the factors taken into account to be identified. They also enabled the creation of a forecast of coal sales volume in Poland for the coming years. To maintain the reliability of the forecast, the mining company should continually monitor the macro environment. A proper demand forecast allows for the flexible and dynamic adjustment of production or stock levels to market changes. It also makes it possible to adapt the product range to the customer's requirements and expectations, which, in turn, translates into increased sales, the release of funds, reduced operating costs and increased financial liquidity of the coal company. Creating a forecast is the first step in planning a hard coal mining strategy. Knowing future needs, the company is able to plan the necessary level of production factors in advance. The right strategy, tailored to the environment, will allow the company to eliminate unnecessary costs and to optimize employment. It will also help the company to fully use its machines, equipment and production capacity. Thanks to these efforts, the company will be able to reduce production costs and increase operating profit, and thus survive in a turbulent environment.
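The accuracy measures named in the abstract are standard and easy to state precisely. A minimal sketch of one common variant of each (the toy sales figures are invented, and the article may use a different variant of Theil's index):

```python
import math

def rmse(actual, forecast):
    # root mean squared error
    return math.sqrt(sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual))

def mape(actual, forecast):
    # mean absolute percentage error, in percent
    return 100.0 / len(actual) * sum(abs((a - f) / a) for a, f in zip(actual, forecast))

def theil_u1(actual, forecast):
    # Theil's U1 inequality coefficient, bounded in [0, 1]; 0 means a perfect forecast
    num = math.sqrt(sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual))
    den = (math.sqrt(sum(a * a for a in actual) / len(actual))
           + math.sqrt(sum(f * f for f in forecast) / len(forecast)))
    return num / den

sales = [100.0, 110.0, 120.0]   # hypothetical actual sales volumes
pred = [110.0, 110.0, 110.0]    # hypothetical model forecasts
```

Reporting several measures together is useful because they penalize errors differently: RMSE weights large misses heavily, MAPE is scale-free, and Theil's coefficient normalizes against the magnitude of the series.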
The paper presents selected issues related to the development of international coal markets. World coal consumption dropped for the second year in a row in 2016, primarily due to lower demand from China and the US. The share of coal in global primary energy consumption decreased to 28%. World coal production amounted to 3.66 billion toe and was 6.2% lower than in the previous year. More than 60% of this decline took place in China. The decline in global production was more than four times greater than the decrease in consumption. The sufficiency of the world's coal resources is estimated at 153 years – three times more than the sufficiency of oil and gas resources. After several years of decline, coal prices increased by 77% in 2016. Current spot prices are at the level of $80/ton and are close to the 2014 prices. In the European market, after the first half of the year, coal prices were around 66% higher than in the same period of the previous year. The average price in the first half of the year amounted to PLN 12.6/GJ, which is close to the 2012 prices. The share of spot trade in total purchases amounted to approximately 20%. Prices in futures contracts can be estimated on the basis of the prices of Japan–Australia contracts and the prices of supplies to power plants located in Germany. On average, the prices of supplies to these power plants were higher by approximately 9% in the years 2010–2016, and the prices in Australia–Japan contracts were 12% higher than CIF ARA prices in 2017. Global energy coal trade reached about 1.012 billion tons in 2016. A decline of 4.8% is expected in 2019, primarily due to the expected reduction in demand in major importing countries in Asia.
The methane hazard is one of the most dangerous phenomena in hard coal mining. In a certain range of concentrations, methane is flammable and explosive. Therefore, in order to maintain the continuity of the production process and the safety of the crew, various measures are taken to prevent these concentration levels from being exceeded. A significant role in this process is played by the forecasting of methane concentrations in mine headings, which is the focus of the present article. Based on discrete measurements of methane concentration in mine headings and of ventilation parameters, the distribution of methane concentration levels in these headings was forecasted. This was performed on the basis of model-based tests using Computational Fluid Dynamics (CFD). The methodology adopted was used to develop a structural model of the region under analysis, for which boundary conditions were adopted on the basis of measurement results obtained in real-world conditions. The analyses conducted helped to specify the distributions of methane concentrations in the region at hand and to determine the anticipated future values of these concentrations. The results obtained from the model-based tests were compared with the results of measurements in real-world conditions. The methodology using CFD and the results of the tests offer extensive possibilities for effective diagnosis and forecasting of the methane hazard in mine headings.
This paper investigates the application of grey system theory to cost forecasting in a coal mine. The grey model GM(1,1) is widely used for forecasting in business and industrial systems, with the advantages of requiring minimal data, a short time and little fluctuation. Moreover, the model fits exponentially increasing data more precisely than other prediction techniques. However, the traditional GM(1,1) model suffers from poor anti-interference ability. Aimed at the flaws of the conventional GM(1,1) model, this paper proposes a novel dynamic forecasting model based on the traditional GM(1,1) model, combining background value optimization with Fourier-series residual error correction. The new model applies the golden segmentation optimization method to optimize the background value and Fourier-series theory to extract periodic information in the grey forecasting model in order to correct the residual error. In the proposed dynamic model, the newest data is gradually added while the oldest is removed from the original data sequence. To test the new model's forecasting performance, it was applied to the prediction of unit costs in coal mining, and the results show that the prediction accuracy is improved compared with other grey forecasting models. The new model gives MAPE and C values of 0.14% and 0.02, respectively, compared to 1.75% and 0.37 for the traditional GM(1,1) model. Thus, the new GM(1,1) model proposed in this paper, with the advantages of practical applicability and high accuracy, provides a new method for cost forecasting in coal mining and can help decision makers make more scientific decisions about mining operations.
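For context, the traditional GM(1,1) baseline that the paper improves upon can be sketched in a few lines: accumulate the series (AGO), fit the grey differential equation by least squares, and invert the accumulation to forecast. This is only the textbook baseline, not the authors' optimized dynamic model, and the input sequence below is invented.

```python
import math

def gm11_forecast(x0, steps=1):
    # 1-AGO: first-order accumulated generating sequence
    x1, s = [], 0.0
    for v in x0:
        s += v
        x1.append(s)
    # background values z(k) = 0.5 * (x1(k) + x1(k-1))
    z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, len(x1))]
    y = x0[1:]
    # least-squares estimates of a, b in the grey equation x0(k) + a*z(k) = b
    n = len(z)
    szz, sz = sum(v * v for v in z), sum(z)
    szy, sy = sum(zi * yi for zi, yi in zip(z, y)), sum(y)
    det = szz * n - sz * sz
    a = (sz * sy - n * szy) / det
    b = (szz * sy - sz * szy) / det
    # time-response function, then inverse AGO back to the original scale
    def x1_hat(k):
        return (x0[0] - b / a) * math.exp(-a * k) + b / a
    m = len(x0)
    preds = [x1_hat(m + i) - x1_hat(m + i - 1) for i in range(steps)]
    return a, b, preds

a, b, preds = gm11_forecast([1.0, 2.0, 4.0, 8.0, 16.0])
```

The rolling variant described in the abstract would simply re-run this fit each period on a window that drops the oldest observation and appends the newest one.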
In this article, we review the state of research on the bullwhip effect in supply chains with stochastic lead times. We analyze the problems arising in a supply chain when lead times are not deterministic. Using real data from a supply chain, we confirm that lead times are stochastic and can be modeled by a sequence of independent, identically distributed random variables. This underlines the need to further study supply chains with stochastic lead times and to model the behavior of such chains.
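A minimal simulation conveys the phenomenon the review addresses: a simple order-up-to policy with a moving-average forecast amplifies demand variance upstream, and random lead times amplify it further. The demand process, the policy, and the discrete lead-time distribution below are illustrative assumptions, not the chain studied in the article.

```python
import random
import statistics

rng = random.Random(3)
n, window = 3000, 5
# i.i.d. customer demand at the downstream stage (hypothetical)
demands = [max(0.0, rng.gauss(100, 10)) for _ in range(n)]

orders, prev_target = [], None
for t in range(window, n):
    # stochastic lead time drawn each period, i.i.d. as in the modelling assumption
    lead = rng.choice([1, 2, 2, 3, 4])
    forecast = sum(demands[t - window:t]) / window   # moving-average forecast
    target = forecast * (lead + 1)                   # order-up-to level
    if prev_target is not None:
        orders.append(max(0.0, demands[t] + target - prev_target))
    prev_target = target

# variance amplification ratio: values above 1 indicate the bullwhip effect
ratio = statistics.variance(orders) / statistics.variance(demands)
```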
The sustainable management of energy production and consumption is one of the main challenges of the 21st century. This results from the threats to the natural environment, including the negative impact of the energy sector on the climate, the limited resources of fossil fuels, as well as the instability of renewable energy sources – despite the development of technologies for obtaining energy from the sun, wind, water, etc. In this situation, the efficiency of energy management, both on the micro (dispersed energy) and macro (power system) scale, may be improved by innovative technological solutions enabling energy storage. Their effective implementation enables energy to be stored during periods of overproduction and used in the case of energy shortages. The importance of these challenges cannot be overestimated. Modern science needs to solve various technological issues in the field of storage, organizational problems of enterprises producing electricity and heat, and issues related to the functioning of energy markets. The article presents the specifics of the operation of a combined heat and power plant with a heat accumulator in the electricity market, taking the parameters affected by uncertainty into account. It is pointed out that the analysis of the risk associated with energy prices and weather conditions is an important element of the decision-making process and of the management of a heat and power plant equipped with a cold water heat accumulator. The complexity of the issues and the number of variables to be analyzed at a given time justify the use of advanced forecasting methods. Stochastic modeling methods are interesting tools that allow the operation of an installation with a heat accumulator to be forecasted while taking the influence of numerous variables into account.
The analysis has shown that the combined use of Monte Carlo simulation and forecasting based on geometric Brownian motion enables the quantification of the risk of the CHP plant's operation and of the impact of using energy storage on mitigating uncertainty. The applied methodology can be used at the design stage of systems with energy storage and enables risk analysis in already existing systems, allowing their efficiency to be improved. Introducing additional parameters of planned investments into the analysis will allow the maximum use of energy storage systems in both industrial and dispersed power generation.
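A sketch of how geometric Brownian motion price paths might be generated and summarized in such a Monte Carlo risk analysis. The initial price, drift, and volatility are invented placeholders (loosely styled as PLN/MWh figures), not parameters from the study.

```python
import math
import random

def gbm_paths(s0, mu, sigma, days, n_paths, seed=7):
    # exact discretization: S(t+dt) = S(t) * exp((mu - sigma^2/2)*dt + sigma*sqrt(dt)*Z)
    rng = random.Random(seed)
    dt = 1.0 / 365.0
    drift = (mu - 0.5 * sigma ** 2) * dt
    vol = sigma * math.sqrt(dt)
    paths = []
    for _ in range(n_paths):
        s, path = s0, [s0]
        for _ in range(days):
            s *= math.exp(drift + vol * rng.gauss(0, 1))
            path.append(s)
        paths.append(path)
    return paths

# distribution of the average price over a 90-day horizon
paths = gbm_paths(s0=250.0, mu=0.05, sigma=0.4, days=90, n_paths=2000)
avg_prices = sorted(sum(p) / len(p) for p in paths)
p5, p95 = avg_prices[int(0.05 * len(paths))], avg_prices[int(0.95 * len(paths))]
```

Quantiles such as the 5th/95th percentile band are what turn the simulated paths into a usable risk statement for operating or dispatch decisions.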
This study involves the implementation of an economic order quantity (EOQ) model – an inventory control method – in a ceramic factory. Two different methods were applied to calculate the EOQs. The first determines EOQ values using a response surface methodology (RSM)-based approach; the second uses conventional EOQ calculations. To produce a ceramic product, 281 different materials and additives may be used. First, a Pareto (ABC) analysis was performed to determine which of the materials have higher priority. This analysis showed that 21 of the 281 materials and additives accounted for 70.4% of the total product value, so the calculations were made for these 21 items. Usage values for every item for the years 2011, 2012, 2013 and 2014 were obtained from the company records. Eight different demand forecasting methods were applied to estimate the demand entering the EOQ. Based on the forecasts, the EOQs of the items were calculated by establishing a model. Both the EOQ and the RSM calculations for the items were made, and the two sets of results were compared. The results indicate that RSM can be used in EOQ calculations instead of the conventional EOQ model. There are also large differences between the EOQ values implemented by the company and the calculated values. As a result of this work, the RSM-based EOQ approach can be used in EOQ calculations as a way of improving system performance.
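The conventional EOQ calculation referred to above is the classic Wilson square-root formula, Q* = sqrt(2DS/H), where D is annual demand, S the fixed cost per order and H the annual holding cost per unit. A minimal sketch with invented demand and cost figures for one material item:

```python
import math

def eoq(annual_demand, order_cost, unit_holding_cost):
    # Wilson formula: the order quantity minimizing total ordering + holding cost
    return math.sqrt(2 * annual_demand * order_cost / unit_holding_cost)

def annual_cost(q, annual_demand, order_cost, unit_holding_cost):
    # ordering cost per year + average holding cost per year at lot size q
    return annual_demand / q * order_cost + q / 2 * unit_holding_cost

# hypothetical figures: D = 12,000 units/year, S = 50 per order, H = 2.4 per unit-year
q_star = eoq(annual_demand=12_000, order_cost=50.0, unit_holding_cost=2.4)
```

In the study, D comes from the demand forecasts, which is why the quality of the forecasting method feeds directly into the quality of the order quantity.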
This paper discusses the challenges faced by the empirical macroeconomist and methods for surmounting them. These challenges arise due to the fact that macroeconometric models potentially include a large number of variables and allow for time variation in parameters. These considerations lead to models which have a large number of parameters to estimate relative to the number of observations. A wide range of approaches are surveyed which aim to overcome the resulting problems. We stress the related themes of prior shrinkage, model averaging and model selection. Subsequently, we consider a particular modelling approach in detail. This involves the use of dynamic model selection methods with large TVP-VARs. A forecasting exercise involving a large US macroeconomic data set illustrates the practicality and empirical success of our approach.
Many research studies argue that it is possible to extract useful information about future real economic activity from the performance of financial markets. This study goes further and shows that it is not only possible to use expectations derived from financial markets to forecast future economic activity, but that data about the financial system itself can be used for this purpose as well. The paper sheds light on the ability to forecast real economic activity based on additional financial variables, different from those presented so far. The research is conducted for the Polish emerging economy on the basis of monthly data. The results suggest that, based purely on data from the financial system, it is possible to construct reasonable measures that can, even for an emerging economy, effectively forecast future real economic activity. The outcomes are confirmed by two different econometric methods, namely a time series analysis and a probit model. All presented models are tested in-sample and out-of-sample.
Bayesian VAR (BVAR) models offer a practical solution to parameter proliferation concerns, as they make it possible to introduce a priori information on the seasonality and persistence of inflation in a multivariate framework. We investigate alternative prior specifications for time series with a clear seasonal pattern. In the empirical part, we forecast monthly headline inflation in the Polish economy over the period 2011–2014, employing two popular BVAR frameworks: a steady-state reduced-form BVAR and a just-identified structural BVAR model. To evaluate the forecast performance, we use pseudo real-time vintages of timely information from consumer and financial markets. We compare the different models in terms of both point and density forecasts. Using a formal testing procedure for density-based scores, we provide empirical evidence of the superiority of the steady-state BVAR specifications with tight seasonal priors.
The dynamic development of wind power in recent years has generated the demand for production forecasting tools in wind farms. The data obtained from mathematical models is useful both for wind farm owners and distribution and transmission system operators. The predictions of production allow the wind farm operator to control the operation of the turbine in real time or plan future repairs and maintenance work in the long run. In turn, the results of the forecasting model allow the transmission system operator to plan the operation of the power system and to decide whether to reduce the load of conventional power plants or to start the reserve units. The presented article is a review of the currently applied methods of wind power generation forecasting. Due to the nature of the input data, physical and statistical methods are distinguished. The physical approach is based on the use of data related to atmospheric conditions, terrain, and wind farm characteristics. It is usually based on numerical weather prediction models (NWP). In turn, the statistical approach uses historical data sets to determine the dependence of output variables on input parameters. However, the most favorable, from the point of view of the quality of the results, are models that use hybrid approaches. Determining the best model turns out to be a complicated task, because its usefulness depends on many factors. The applied model may be highly accurate under given conditions, but it may be completely unsuitable for another wind farm.
The literature on exchange rate forecasting is vast. Many researchers have tested whether the implications of theoretical economic models or the use of advanced econometric techniques can help explain future movements in exchange rates. The results of empirical studies for the major world currencies show that forecasts from a naive random walk tend to be comparable to, or even better than, forecasts from more sophisticated models. In the case of the Polish zloty, the discussion of exchange rate forecasting in the literature is scarce. This article fills this gap by testing whether non-linear time series models are able to generate forecasts of the nominal exchange rate of the Polish zloty that are more accurate than forecasts from a random walk. Our results confirm the main finding from the literature, namely that it is difficult to outperform a naive random walk in an exchange rate forecasting contest.
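The flavour of such a forecasting contest can be sketched as follows: on a series that truly is a random walk, the naive "no change" forecast is hard to beat, because any competing model only adds noise or bias. The simulated series and the shrinkage "competitor" below are purely illustrative stand-ins for the nonlinear models tested in the article.

```python
import math
import random

def rmse(errors):
    return math.sqrt(sum(e * e for e in errors) / len(errors))

rng = random.Random(1)
# simulate a log exchange rate as a pure random walk (illustrative, not zloty data)
rates = [0.0]
for _ in range(500):
    rates.append(rates[-1] + rng.gauss(0, 0.01))

# one-step-ahead out-of-sample forecasts over the last 200 observations
naive_err, competitor_err = [], []
for t in range(300, 500):
    naive_err.append(rates[t + 1] - rates[t])   # random walk: forecast = last value
    # hypothetical competitor: shrink the last value toward the historical mean
    hist_mean = sum(rates[:t]) / t
    competitor_err.append(rates[t + 1] - (0.9 * rates[t] + 0.1 * hist_mean))

# on a true random walk the naive forecast is typically at least as accurate
rmse_naive, rmse_competitor = rmse(naive_err), rmse(competitor_err)
```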
Daily prices on different markets are often not all observable. The question is whether we should exclude from modelling the days with prices unavailable on some markets (thus losing some information and implicitly modifying the time axis) or somehow complete the missing (non-existing) prices. In order to compare the effects of these two ways of dealing with partly available data, one should consider formal procedures for replacing the unavailable prices with appropriate predictions. We propose a fully Bayesian approach, which amounts to obtaining the marginal posterior (or predictive) distribution for any particular day in question. This procedure takes the uncertainty about missing prices into account and can be used to check the validity of informal ways of "completing" the data (e.g. linear interpolation). We use the MSF-SBEKK structure, the simplest among hybrid MSV-MGARCH models, which can parsimoniously describe the volatility of a large number of prices or indices. In order to conduct Bayesian inference, the conditional posterior distributions for all unknown quantities are derived and a Gibbs sampler (with Metropolis-Hastings steps) is designed. Our approach is applied to daily prices from six different financial and commodity markets; the data cover the period from December 21, 2005 to September 30, 2011, so the time of the global financial crisis is included. We compare inferences (on individual parameters, conditional correlation coefficients and volatilities) obtained in the cases where incomplete observations are either deleted or forecasted.
This paper describes a forecasting exercise of close-to-open returns on major global stock indices, based on high-frequency price patterns that have become available in foreign markets overnight. Generally speaking, out-of-sample forecast performance depends on the forecast method as well as the information that the forecasts are based on. In this paper both aspects are considered. The fact that the close-to-open gap is a scalar response variable to a functional variable, namely an overnight foreign price pattern, brings the prediction exercise in the realm of functional data analysis. Both parametric and non-parametric functional data analysis are considered, and compared with a simple linear benchmark model. The information set is varied by dividing global markets into three clusters, Asia-Pacific, Europe and North-America, and including or excluding price patterns on a per-cluster basis. The overall best performing forecast is nonparametric using all available information, suggesting the presence of nonlinear relations between the overnight price patterns and the opening gaps.
The purpose of this paper is to model the daily returns of the WIG20 index. The idea is to consider a model that explicitly takes changes in the amplitude of the clusters of volatility into account. This variation is modelled by a positive-valued deterministic component. A novelty in the specification of the model is that the deterministic component is specified before the multiplicative conditional variance component is estimated. The resulting model is subjected to misspecification tests and its forecasting performance is compared with that of commonly applied models of conditional heteroskedasticity.