Traffic classification is an important tool for network management. It reveals the source of observed network traffic and has many potential applications, e.g. in Quality of Service, network security and traffic visualization. In the last decade, traffic classification evolved quickly due to the rise of peer-to-peer traffic. Researchers continue to devise new methods to keep pace with the rapid changes of the Internet. In this paper, we review 13 publications on traffic classification and related topics published during 2009-2012. We show the diversity of recent algorithms and highlight possible directions for future research on traffic classification: the relevance of multi-level classification, the importance of experimental validation, and the need for common traffic datasets.
To prevent web spam from manipulating search engine results, anti-spam systems use machine learning techniques to detect spam. However, if the system's learning set is out of date, the quality of classification falls rapidly. We present a web spam recognition system that periodically refreshes the learning set to maintain an adequate classifier. A new classifier is trained exclusively on data collected during the most recent period. We show that this strategy is better than incrementally growing the learning set. The system solves the start-up problem of a sparse learning set by minimising the number of required learning examples and utilising external data sets. The system was tested on real data from spam traps and from well-known web services: Quora, Reddit, and Stack Overflow. Tests performed over ten months show the stability of the system and an improvement of the results of up to 60 percent by the end of the examined period.
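The refresh strategy described above can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions: a toy keyword-counting classifier stands in for the paper's actual machine-learning models, and the helper names (`latest_window`, `retrain`) are invented for the sketch.

```python
from collections import Counter

def latest_window(examples, now, period):
    """Keep only (text, label) pairs collected during the last period."""
    return [(x, y) for (t, x, y) in examples if now - period <= t <= now]

class KeywordSpamClassifier:
    """Toy stand-in classifier: counts token occurrences in spam vs. ham."""
    def fit(self, data):
        self.spam, self.ham = Counter(), Counter()
        for text, label in data:
            (self.spam if label == "spam" else self.ham).update(text.split())
        return self

    def predict(self, text):
        s = sum(self.spam[w] for w in text.split())
        h = sum(self.ham[w] for w in text.split())
        return "spam" if s > h else "ham"

def retrain(examples, now, period):
    """Discard the old model entirely; fit a fresh one on the newest window only."""
    return KeywordSpamClassifier().fit(latest_window(examples, now, period))
```

Note that `retrain` deliberately throws the previous model away rather than growing its training set, mirroring the strategy the abstract argues is superior to incrementation.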
This paper presents a comprehensive study on machine listening for localisation of snore sound excitation. We investigate the effects of varying the frame size and overlap of the analysed audio chunks when extracting low-level descriptors. In addition, we explore the performance of each kind of feature when fed into various classifier models, including support vector machines, k-nearest neighbours, linear discriminant analysis, random forests, extreme learning machines, kernel-based extreme learning machines, multilayer perceptrons, and deep neural networks. Experimental results demonstrate that wavelet packet transform energy can outperform most other features. A deep neural network trained with subband energy ratios achieves the highest performance, an unweighted average recall of 72.8% across the four snoring types.
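The frame-size/overlap experiment can be illustrated with a short sketch. The subband energy ratios below are computed from equal-width FFT bands as a simplified stand-in for the paper's wavelet-packet-transform energies; `frame_signal` and `subband_energy_ratios` are hypothetical helper names, not the authors' code.

```python
import numpy as np

def frame_signal(x, frame_len, overlap):
    """Split a 1-D signal into fixed-length frames with the given overlap ratio."""
    hop = max(1, int(frame_len * (1.0 - overlap)))
    n = 1 + max(0, (len(x) - frame_len) // hop)
    return np.stack([x[i * hop : i * hop + frame_len] for i in range(n)])

def subband_energy_ratios(frames, n_bands=4):
    """Per-frame energy ratios in equal-width FFT subbands -- a simplified
    stand-in for the wavelet-packet-transform energies used in the paper."""
    spec = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    bands = np.array_split(spec, n_bands, axis=1)
    energies = np.stack([b.sum(axis=1) for b in bands], axis=1)
    return energies / energies.sum(axis=1, keepdims=True)

# framing a one-second toy signal: 256-sample frames, 50% overlap
x = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 1000))
ratios = subband_energy_ratios(frame_signal(x, 256, 0.5))
```

Varying `frame_len` and `overlap` changes both the number of frames and the time/frequency trade-off of the descriptors, which is exactly the axis the study explores.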
We evaluated the performance of nine machine learning regression algorithms and their ensembles for sub-pixel estimation of impervious area coverage from Landsat imagery. The accuracy of imperviousness mapping at individual time points was assessed based on RMSE, MAE and R². These measures were also used to assess the estimates of imperviousness change intensity. The applicability for detecting relevant changes in impervious area coverage at the sub-pixel level was evaluated using overall accuracy, F-measure and the area under the ROC curve. The results show that the Cubist algorithm can be recommended for Landsat-based mapping of imperviousness for single dates. Stochastic gradient boosting of regression trees (GBM) may also be considered for this purpose. However, the Random Forest algorithm is endorsed for both imperviousness change detection and mapping of change intensity. In all applications the heterogeneous model ensembles performed at least as well as the best individual models, or better. They can be recommended for improving the quality of sub-pixel imperviousness and imperviousness change mapping. The study also revealed limitations of the investigated methodology for detecting subtle within-pixel changes of imperviousness. None of the tested approaches was able to reliably separate changed and non-changed pixels when the relevant-change threshold was set to one or three percent. Even at a five percent threshold, most algorithms did not produce change maps more accurate than a random classifier. With the threshold set to ten percent, all approaches performed satisfactorily.
In this work nine non-linear regression models were compared for sub-pixel impervious surface area mapping from Landsat images. The comparison was carried out in three study areas, both for the accuracy of imperviousness coverage evaluation at individual points in time and for the accuracy of imperviousness change assessment. The performance of the individual machine learning algorithms (Cubist, Random Forest, stochastic gradient boosting of regression trees, k-nearest neighbors regression, random k-nearest neighbors regression, Multivariate Adaptive Regression Splines, averaged neural networks, and support vector machines with polynomial and radial kernels) was also compared with that of heterogeneous model ensembles constructed from the best models trained with the particular techniques. The results show that for sub-pixel evaluation the most accurate prediction of change is not necessarily based on the most accurate individual assessments. When single methods are considered, the Cubist algorithm can be recommended for Landsat-based mapping of imperviousness for single dates. However, Random Forest is endorsed when the most reliable evaluation of imperviousness change is the primary goal: it gave lower accuracies for individual assessments, but better predictions of change, because the errors of its individual predictions were more strongly correlated. For individual-time-point assessments, the heterogeneous model ensembles performed at least as well as the best individual models; for imperviousness change assessment, the ensembles always outperformed the single-model approaches. This means that the accuracy of sub-pixel imperviousness change assessment can be improved using ensembles of heterogeneous non-linear regression models.
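A minimal numerical sketch of why a heterogeneous averaging ensemble can beat its best member: three synthetic "models" predict per-pixel imperviousness with different biases and noise levels, and averaging cancels part of the uncorrelated error. All numbers here are synthetic, chosen only for illustration, and do not come from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
y_true = rng.uniform(0, 100, 500)            # synthetic per-pixel imperviousness (%)

# three imperfect stand-in models: truth plus model-specific bias and noise
preds = [y_true + rng.normal(bias, sd, y_true.size)
         for bias, sd in [(2.0, 8.0), (-1.5, 9.0), (0.5, 10.0)]]

def rmse(y, p):
    return float(np.sqrt(np.mean((y - p) ** 2)))

ensemble = np.mean(preds, axis=0)            # simple heterogeneous averaging ensemble
best_single = min(rmse(y_true, p) for p in preds)
ensemble_rmse = rmse(y_true, ensemble)
```

Because the three error terms are uncorrelated, the variance of their average is roughly a third of each individual variance, so the ensemble RMSE drops well below the best single model's.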
This paper presents an alternative approach to sequential data classification, based on traditional machine learning algorithms (neural networks, principal component analysis, a multivariate Gaussian anomaly detector) and on finding the shortest path in a directed acyclic graph with the A* algorithm and a regression-based heuristic. Palm gestures were used as an example of sequential data, with a quadrocopter as the controlled object. The study includes the creation of a conceptual model and the practical construction of a system that uses the GPU to ensure real-time operation. The results report the classification accuracy for the chosen gestures and compare computation times between the CPU- and GPU-based solutions.
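For reference, the graph-search component can be sketched as a generic A* over a weighted DAG. The heuristic below is a zero placeholder (the paper uses a regression-based one), and both `a_star` and the demo graph are assumptions of this sketch, not the authors' implementation.

```python
import heapq

def a_star(graph, h, start, goal):
    """Shortest path with A*. `graph[u]` maps each successor of u to an edge
    cost; `h(u)` is an admissible (never overestimating) estimate of the
    remaining cost from u to `goal`."""
    frontier = [(h(start), 0.0, start, [start])]   # entries: (f = g + h, g, node, path)
    settled = {}
    while frontier:
        f, g, u, path = heapq.heappop(frontier)
        if u == goal:
            return g, path
        if settled.get(u, float("inf")) <= g:
            continue                               # already expanded at lower cost
        settled[u] = g
        for v, w in graph.get(u, {}).items():
            heapq.heappush(frontier, (g + w + h(v), g + w, v, path + [v]))
    return float("inf"), []

# tiny demo DAG; with h == 0 this degenerates to Dijkstra's algorithm
demo = {"s": {"a": 1.0, "b": 4.0}, "a": {"g": 5.0}, "b": {"g": 1.0}}
cost, path = a_star(demo, lambda u: 0.0, "s", "g")
```

In the paper's setting, graph nodes would correspond to segments of the gesture sequence and the regression model would supply a tighter `h` than the zero heuristic used here.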
Affective computing studies and develops systems capable of detecting human affect. The search for universal, well-performing features for speech-based emotion recognition is ongoing. In this paper, a small set of features with support vector machines as the classifier is evaluated on the Surrey Audio-Visual Expressed Emotion database, the Berlin Database of Emotional Speech, the Polish Emotional Speech database and the Serbian emotional speech database. It is shown that a set of 87 features can offer results on par with the state of the art, yielding average emotion recognition rates of 80.21, 88.6, 75.42 and 93.41%, respectively. In addition, an experiment is conducted to explore the significance of gender in emotion recognition using random forests. Two models, trained on the first and second database respectively, and four speakers were used to determine the effects. The feature set used in this work performs well for both male and female speakers, yielding approximately 27% average emotion recognition in both models. The emotions of female speakers were recognized 18% of the time by the first model and 29% by the second. A similar effect is seen with male speakers: the first model yields a 36% and the second a 28% average emotion recognition rate. This illustrates the relationship between the constitution of the training data and emotion recognition accuracy.
Land surveyors, photogrammetrists, remote sensing engineers and professionals in the Earth sciences are often faced with the task of transforming coordinates from one geodetic datum into another to serve their desired purpose. The essence is to create compatibility between data related to different geodetic reference frames for geospatial applications. Conventionally, conformal, affine and projective transformation models are mostly used to accomplish this task. In developing countries like Ghana, where there are no immediate plans to establish a geocentric datum and astro-geodetic datums still serve as the national mapping reference surface, there is an urgent need to explore the suitability of other transformation methods. In this study, an effort has been made to explore the proficiency of the Extreme Learning Machine (ELM) as a novel alternative coordinate transformation method. The proposed ELM approach was applied to data from the Ghana geodetic reference network. The ELM transformation results were analysed and compared with the benchmark methods of backpropagation neural network (BPNN), radial basis function neural network (RBFNN), two-dimensional (2D) affine and 2D conformal transformation. The overall results indicate that the ELM can produce transformation results comparable to the widely used BPNN and RBFNN, and better than the 2D affine and 2D conformal models. These results demonstrate that the ELM is a promising tool for coordinate transformation in Ghana.
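The core of an ELM is simple enough to sketch: the hidden layer is random and never trained, and the output weights come from a single least-squares solve. The toy "datum shift" below is synthetic (a slightly rotated, scaled and shifted grid) and only illustrates the mechanics; it is not the Ghana network data, and all names are assumptions of this sketch.

```python
import numpy as np

def elm_fit(X, Y, n_hidden=40, seed=0):
    """Extreme Learning Machine: random, untrained hidden layer; the output
    weights are obtained in one step via the Moore-Penrose pseudoinverse."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (fixed)
    b = rng.normal(size=n_hidden)                 # random hidden biases (fixed)
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta = np.linalg.pinv(H) @ Y                  # analytic output weights
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# toy "datum shift": target grid = slightly rotated, scaled, shifted source grid
rng = np.random.default_rng(1)
src = rng.uniform(0.0, 1000.0, (200, 2))
t = 1e-3
R = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
dst = 1.0001 * src @ R.T + np.array([120.0, -80.0])

model = elm_fit(src / 1000.0, dst)                # scale inputs into tanh's range
residuals = elm_predict(model, src / 1000.0) - dst
```

The absence of iterative weight updates is what makes ELM training fast compared with the backpropagation-based BPNN baseline.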
A variety of algorithms allows gesture recognition in video sequences. Alleviating the need for interpreters is of interest to hearing-impaired people, since it affords a great degree of self-sufficiency in communicating their intent to non-sign-language speakers. The state of the art in this domain is capable of either real-time recognition of sign language in low-resolution videos or non-real-time recognition in high-resolution videos. This paper proposes a novel approach to real-time recognition of fingerspelling alphabet letters of American Sign Language (ASL) in ultra-high-resolution (UHD) video sequences. The proposed approach is based on adaptive Laplacian of Gaussian (LoG) filtering with local extrema detection using the Features from Accelerated Segment Test (FAST) algorithm, classified by a Convolutional Neural Network (CNN). The recognition rate of our algorithm was verified on real-life data.
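The LoG filtering stage can be sketched with a dependency-free convolution. This only illustrates how a LoG kernel produces strong extrema at blob-like structures; the adaptive filtering, FAST detection and CNN classification of the actual method are not reproduced here, and all names and parameters are assumptions of the sketch.

```python
import numpy as np

def log_kernel(size=9, sigma=1.4):
    """Discrete Laplacian-of-Gaussian kernel (made zero-sum, so flat image
    regions produce no response)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx ** 2 + yy ** 2
    k = (r2 - 2 * sigma ** 2) / sigma ** 4 * np.exp(-r2 / (2 * sigma ** 2))
    return k - k.mean()

def convolve2d_valid(img, k):
    """Naive 'valid' 2-D convolution -- slow, but has no dependencies."""
    kh, kw = k.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

# a bright blob on a dark background yields a strong LoG extremum at its centre
img = np.zeros((32, 32))
img[14:18, 14:18] = 1.0
resp = convolve2d_valid(img, log_kernel())
peak = np.unravel_index(np.argmax(np.abs(resp)), resp.shape)
```

In the full pipeline, such extrema would be screened by the FAST corner test before the surviving patches are handed to the CNN.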
Similarity assessment between 3D models is an important problem in many fields, including medicine, biology and industry. As there is no direct method to compare 3D geometries, different model representations (shape signatures) have been developed to enable shape description, indexing and clustering. Even though some of those descriptors achieve high classification precision, their application is often limited. In this work, a different approach to similarity assessment of 3D CAD models is presented. Instead of focusing on one specific shape signature, 45 easy-to-extract shape signatures were considered simultaneously. The vector of those features constituted the input to three machine learning algorithms: a random forest classifier, a support vector classifier and a fully connected neural network. The usefulness of the proposed approach was evaluated on a dataset of over 1600 CAD models belonging to 9 separate classes. Different hyperparameter values, as well as neural network configurations, were considered. Retrieval accuracy exceeding 99% was achieved on the test dataset.
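A few "easy-to-extract" signatures can be sketched directly from mesh geometry. The descriptors below (bounding-box ratios, surface area, volume, compactness) are a hypothetical subset chosen for illustration; the paper's actual 45 signatures are not specified here, and `shape_signatures` is an invented helper.

```python
import numpy as np

def shape_signatures(vertices, faces):
    """A handful of global shape signatures of a triangle mesh -- a
    hypothetical stand-in for the 45 descriptors used in the paper."""
    v = np.asarray(vertices, dtype=float)
    ext = v.max(axis=0) - v.min(axis=0)                  # bounding-box extents
    tri = v[np.asarray(faces)]                           # shape (n_faces, 3, 3)
    cross = np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0])
    area = 0.5 * np.linalg.norm(cross, axis=1).sum()     # total surface area
    # signed-volume sum (divergence theorem); assumes consistent face winding
    vol = abs(np.einsum('ij,ij->i', tri[:, 0],
                        np.cross(tri[:, 1], tri[:, 2])).sum()) / 6.0
    return {"extent_ratio_xy": ext[0] / ext[1],
            "extent_ratio_yz": ext[1] / ext[2],
            "surface_area": area,
            "volume": vol,
            "compactness": area ** 3 / max(vol ** 2, 1e-12)}

# unit right tetrahedron with outward-oriented faces
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
faces = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]
sig = shape_signatures(verts, faces)
```

Stacking many such scalars into one feature vector is what lets generic classifiers like random forests or SVMs operate on 3D shapes without a direct geometry comparison.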
There were two aims of this research. One was to enable more or less automatic confirmation of known associations, either quantitative or qualitative, between technological data and selected properties of concrete materials. Even more important was the second aim: demonstrating the possibility of automatic identification of new such relationships, not yet recognized by civil engineers. The relationships are to be obtained by methods of Artificial Intelligence (AI) and are to be based on actual results from experiments on concrete materials. The reason for applying AI tools is that in civil engineering the real data are typically imperfect, complex, fuzzy and often have missing details, which means that analysing them in the traditional way, by building empirical models, is hardly possible or at least cannot be done quickly. The main idea of the proposed approach was to combine different AI methods in one system aimed at the estimation, prediction, design and/or optimization of composite materials. The paradigm of the approach is that the unknown rules governing the properties of concrete are hidden in experimental results and can be extracted from the analysis of examples. Different AI techniques, such as artificial neural networks, machine learning and certain techniques related to statistics, were applied. The data for the analysis originated from direct observations and from reports and publications on concrete technology. Among other things, it has been demonstrated that by combining different AI methods it is possible to improve the quality of the data (e.g. when encountering outliers and missing values, or in clustering problems), so that the whole data-processing system gives better predictions (when applying ANNs) or the newly discovered rules are more effective (e.g. with descriptions that are more complete and, at the same time, more consistent, in the case of ML algorithms).
The genesis of both coherent structures and reactive flow control strategies is explored. Futuristic control systems that utilize microsensors and microactuators together with artificial intelligence to target specific coherent structures in a transitional or turbulent flow are considered. Of possible interest to the readers of this journal is the concept of smart wings, to be briefly discussed early in the article.