The term “metalearning”, introduced into the scientific literature by J. Biggs (1985), denotes, broadly speaking, an awareness of one’s own learning process and the exercise of control over it. Metalearning, whose roots lie in a child’s early personal experiences of learning, and which is expressed in his or her current concepts, is considered in this article as a basic condition for acquiring one of the key competences of the 21st-century person, namely the learning competence. Recognizing the importance of colloquial concepts of learning, as well as their uniqueness and contextuality, in this article I present the main problems associated with investigating and understanding the personal learning worlds of pupils completing early education. On the basis of an analysis of the scientific literature and of previous studies conducted abroad, as well as a number of my own research projects (applying either a quantitative or a qualitative approach), I present questions, doubts and selected difficulties that emerge in the application of both research approaches.
The authors show how to strengthen the educational power of the museum. They emphasize the historical and contextual variability of the main functions performed by museums, point to the museum’s location within the urban community, and broaden the scope of its activities to different communities. They characterize contemporary models of museum education, along with arguments for adopting different models of learning by both visitors and museum staff. The article presents two practices which, in the authors’ opinion, are conducive to learning in and by the museum.
The complexity of the phenomena associated with the cognitive processes that determine efficient learning excludes the possibility of collecting knowledge in ways other than neuronal-informational ones. It likewise excludes the possibility of interpreting that knowledge other than with suitably formalized cognitive models. The presented paper is a summary of the latest achievements in this field.
Traffic classification is an important tool for network management. It reveals the source of observed network traffic and has many potential applications, e.g. in Quality of Service, network security and traffic visualization. In the last decade, traffic classification evolved quickly due to the rise of peer-to-peer traffic. Nowadays, researchers still devise new methods in order to withstand the rapid changes of the Internet. In this paper, we review 13 publications on traffic classification and related topics that were published during 2009-2012. We show the diversity of recent algorithms and we highlight possible directions for future research on traffic classification: the relevance of multi-level classification, the importance of experimental validation, and the need for common traffic datasets.
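As a minimal illustration of the baseline that the surveyed methods aim to improve upon, the sketch below shows classic port-based classification, which fails for peer-to-peer traffic precisely because such traffic uses unpredictable ports. The port-to-application mapping is a small illustrative subset, not a complete registry.

```python
# Port-based traffic classifier: maps a flow's server-side port to an
# application label. Newer statistical/ML classifiers replace this
# lookup because modern (e.g. peer-to-peer) traffic evades fixed ports.
WELL_KNOWN_PORTS = {
    80: "http",
    443: "https",
    53: "dns",
    25: "smtp",
}

def classify_flow(dst_port):
    """Return an application label for a flow, or 'unknown'."""
    return WELL_KNOWN_PORTS.get(dst_port, "unknown")
```

A flow on port 6881 (commonly BitTorrent) comes back as "unknown" here, which is exactly the gap that statistical and multi-level classifiers address.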
In multi-axis motion control systems, the tracking errors of a single axis load and the contour errors caused by the mismatch of dynamic characteristics between the moving axes affect the accuracy of the motion control system. To solve this issue, a biaxial motion control strategy based on double-iterative learning and cross-coupling control is proposed. The proposed control method improves the accuracy of the motion control system by improving both individual axis tracking performance and contour tracking performance. On this basis, a rapid control prototype (RCP) is designed, and the approach is verified experimentally on LabVIEW and CompactRIO hardware and software platforms. The whole design shows enhanced precision in the motion control of the multi-axis system, with greatly improved performance in both individual axis tracking and contour tracking.
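The cross-coupling idea can be sketched as follows for a biaxial system: the contour error is estimated from the two per-axis tracking errors and redistributed onto the axes by coupling gains. The locally linear contour approximation and the gain value are illustrative assumptions, not the paper's exact formulation.

```python
import math

def cross_coupling_correction(ex, ey, tangent_angle, gain=1.0):
    """Estimate the contour error of a biaxial system from the axis
    tracking errors (ex, ey), assuming a locally linear contour with
    the given tangent angle theta, and return per-axis corrections.

    eps = -ex*sin(theta) + ey*cos(theta)   (signed distance to contour)
    """
    eps = -ex * math.sin(tangent_angle) + ey * math.cos(tangent_angle)
    # Project the compensation back onto each axis (cross-coupling).
    ux = gain * eps * math.sin(tangent_angle)
    uy = -gain * eps * math.cos(tangent_angle)
    return eps, ux, uy
```

Note that equal tracking errors on both axes along a 45° contour produce zero contour error: the tool is off its reference point but still on the path, which is why contour control is treated separately from per-axis tracking.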
A hundred years ago education aimed mainly at memorizing as much information as possible. Such an approach has lost its purpose in today’s digital age, since we are overwhelmed by an easily accessible ocean of true information mixed with “fake news”. Hence, the role of teachers nowadays must be to guide and organize the learning process rather than to provide knowledge. Students must no longer be passive recipients but active participants in the process of acquiring knowledge. The new approach of “phenomenon-based learning”, introduced in schools in Finland, Norway and other countries, also agrees with the holistic character of human cognition, rather than absorbing information sliced into traditional disciplines. In the future, say fifty years from now, the role of teachers may be partly taken over by robots, which, however, cannot replace the creative thinking of human beings.
The article presents reflections on the intergenerational educational-research project entitled “Restoring the Memory of the City”. The project was carried out by the University of the Third Age in Toruń in partnership with the Faculty of Education of the Nicolaus Copernicus University in Toruń within the “Patriotism of Tomorrow” framework announced by the Polish History Museum and financed by the Ministry of Culture and National Heritage. The project was based on Pierre Nora’s concept of memorial sites and on a modern vision of patriotism. In its didactic and methodological layer it was embedded within the framework of action research, thereby allowing historical content to be combined with a pedagogical method of its modern transfer. The text shows the objectives and results of the project and describes its course and the activities undertaken throughout its duration. The presentation of the project’s results focuses on the multidimensionality of the intergenerational learning process related to them.
In the last few years, great attention has been paid to deep learning techniques for image analysis because of their ability to transform input data into high-level representations. For the sake of accurate diagnosis, the medical field has a steadily growing interest in such technology, especially in the diagnosis of melanoma. These deep learning networks operate through coarse segmentation, convolutional filters and pooling layers. However, this segmentation of skin lesions results in an image of lower resolution than the original skin image. In this paper, we present deep learning based approaches to solve the problems of skin lesion analysis using dermoscopic images containing skin tumors. The proposed models are trained and evaluated on standard benchmark datasets from the International Skin Imaging Collaboration (ISIC) 2018 Challenge. The proposed method achieves an accuracy of 96.67% on the validation set. The experimental tests carried out on a clinical dataset show that the classification performance using deep learning-based features is better than that of state-of-the-art techniques.
The goal of this research is to find a set of acoustic parameters that are related to differences between Polish and Lithuanian language consonants. In order to identify these differences, an acoustic analysis is performed, and the phoneme sounds are described as vectors of acoustic parameters. Parameters known from the speech domain as well as those from the music information retrieval area are employed. These parameters are time- and frequency-domain descriptors. English is used as an auxiliary language in the experiments. In the first part of the experiments, an analysis of Lithuanian and Polish language samples is carried out, features are extracted, and the most discriminating ones are determined. In the second part of the experiments, automatic classification of Lithuanian/English, Polish/English, and Lithuanian/Polish phonemes is performed.
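One of the simplest time-domain descriptors of the kind mentioned above is the zero-crossing rate. The sketch below is a minimal generic implementation, not the paper's exact feature set.

```python
def zero_crossing_rate(frame):
    """Fraction of consecutive sample pairs whose signs differ --
    a simple time-domain descriptor that tends to be higher for
    noise-like consonants (e.g. fricatives) than for voiced sounds."""
    if len(frame) < 2:
        return 0.0
    crossings = sum(
        1 for a, b in zip(frame, frame[1:]) if (a >= 0) != (b >= 0)
    )
    return crossings / (len(frame) - 1)
```

In a classification pipeline such a descriptor would be computed per analysis frame and stacked with frequency-domain parameters into the feature vector describing each phoneme sample.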
This paper presents an improved version of a classification system for supporting glaucoma diagnosis in ophthalmology. We propose a new segmentation step based on the support vector clustering algorithm, which enables better classification performance.
In recent years, deep learning and especially deep neural networks (DNN) have obtained impressive performance on a variety of problems, in particular in classification and pattern recognition. Among many kinds of DNNs, convolutional neural networks (CNN) are the most commonly used. However, due to their complexity, there are many problems related, but not limited, to optimizing network parameters, avoiding overfitting and ensuring good generalization abilities. Therefore, a number of methods have been proposed by researchers to deal with these problems. In this paper, we present the results of applying different, recently developed methods to improve deep neural network training and operation. We decided to focus on the most popular CNN structures, namely on VGG-based neural networks: VGG16, VGG11 and our proposed VGG8. The tests were conducted on a real and very important problem of skin cancer detection. A publicly available dataset of skin lesions was used as a benchmark. We analyzed the influence of applying dropout, batch normalization, model ensembling, and transfer learning. Moreover, the influence of the type of activation function was checked. In order to increase the objectivity of the results, each of the tested models was trained 6 times and their results were averaged. In addition, in order to mitigate the impact of the selection of training, test and validation sets, k-fold validation was applied.
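Of the regularization techniques listed above, dropout is the easiest to sketch in isolation. The following is a minimal, framework-free illustration of inverted dropout on a vector of activations; real CNN training would apply this per layer inside a framework.

```python
import random

def inverted_dropout(activations, p_drop, rng=None):
    """Inverted dropout: zero each activation with probability p_drop
    and scale the survivors by 1/(1 - p_drop), so the expected value
    of each unit is unchanged and no rescaling is needed at test time."""
    rng = rng or random.Random()
    keep = 1.0 - p_drop
    return [
        a / keep if rng.random() >= p_drop else 0.0
        for a in activations
    ]
```

Because the surviving activations are pre-scaled during training, inference simply runs the network with dropout disabled, which is the convention modern frameworks follow.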
In this article I present the main assumptions of, and discuss issues concerning, pedagogy as a science and as a field of education raised during a special meeting of the Committee of Pedagogical Sciences of the Polish Academy of Sciences at Adam Mickiewicz University in Poznań. I focus on the institutional leaders in the teaching of this science, namely the rectors and deans of the Faculties of Education in Poland, who are also co-authors of relevant teaching and research solutions in the field. In an age of growing crisis in the academic community we can, as educators, discuss how not to be surprised by pathogenic processes and events, but rather how to counteract them, and, furthermore, how to show representatives of other academic disciplines and structures of learning how to deal with the problems we have in common.
This paper proposes a comprehensive study on machine listening for the localisation of snore sound excitation. We investigate the effects of varied frame sizes and of the overlap of the analysed audio chunks for extracting low-level descriptors. In addition, we explore the performance of each kind of feature when it is fed into varied classifier models, including support vector machines, k-nearest neighbours, linear discriminant analysis, random forests, extreme learning machines, kernel-based extreme learning machines, multilayer perceptrons, and deep neural networks. Experimental results demonstrate that wavelet packet transform energy can outperform most other features. A deep neural network trained with subband energy ratios reaches the highest performance, achieving an unweighted average recall of 72.8% across the four types of snoring.
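The evaluation metric quoted above, unweighted average recall (UAR), is the mean of the per-class recalls and is standard for imbalanced paralinguistic tasks. A minimal reference implementation:

```python
def unweighted_average_recall(y_true, y_pred):
    """Unweighted average recall (UAR): the mean of per-class recalls.
    Unlike plain accuracy, it is not dominated by frequent classes,
    which matters when the snore-type classes are imbalanced."""
    classes = sorted(set(y_true))
    recalls = []
    for c in classes:
        idx = [i for i, t in enumerate(y_true) if t == c]
        hits = sum(1 for i in idx if y_pred[i] == c)
        recalls.append(hits / len(idx))
    return sum(recalls) / len(classes)
```

With four snore-type classes, chance-level UAR is 25%, so the reported 72.8% is well above chance regardless of the class distribution.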
Nowadays, the Internet connects people, multimedia and physical objects, leading to a new wave of services. This includes learning applications, which require managing huge and mixed volumes of information coming from the Web and social media, smart cities and Internet of Things nodes. Unfortunately, designing smart e-learning systems able to take advantage of such a complex technological space raises different challenges. In this perspective, this paper introduces a reference architecture for the development of future, big-data-capable e-learning platforms. It also showcases how data can be used to enrich the learning process.
We evaluated the performance of nine machine learning regression algorithms and their ensembles for sub-pixel estimation of impervious area coverage from Landsat imagery. The accuracy of imperviousness mapping at individual time points was assessed based on RMSE, MAE and R². These measures were also used for the assessment of imperviousness change intensity estimations. The applicability for detection of relevant changes in impervious area coverage at the sub-pixel level was evaluated using overall accuracy, F-measure and ROC Area Under Curve. The results proved that the Cubist algorithm may be advised for Landsat-based mapping of imperviousness for single dates. Stochastic gradient boosting of regression trees (GBM) may also be considered for this purpose. However, the Random Forest algorithm is endorsed for both imperviousness change detection and mapping of its intensity. In all applications the heterogeneous model ensembles performed at least as well as the best individual models, or better. They may be recommended for improving the quality of sub-pixel imperviousness and imperviousness change mapping. The study also revealed limitations of the investigated methodology for detection of subtle changes of imperviousness inside the pixel. None of the tested approaches was able to reliably classify changed and non-changed pixels if the relevant change threshold was set at one or three percent. Also, for the five percent change threshold most of the algorithms did not ensure that the accuracy of the change map was higher than the accuracy of a random classifier. For the threshold of relevant change set at ten percent all approaches performed satisfactorily.
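The accuracy measures named above, and the point-wise averaging that underlies the simplest heterogeneous ensemble, can be sketched as follows. This is a generic illustration of the metrics, not the study's evaluation code.

```python
def rmse(y, yhat):
    """Root mean squared error."""
    return (sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y)) ** 0.5

def mae(y, yhat):
    """Mean absolute error."""
    return sum(abs(a - b) for a, b in zip(y, yhat)) / len(y)

def r2(y, yhat):
    """Coefficient of determination (R^2)."""
    mean_y = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, yhat))
    ss_tot = sum((a - mean_y) ** 2 for a in y)
    return 1.0 - ss_res / ss_tot

def ensemble_mean(model_predictions):
    """Average several models' predictions point-wise -- the simplest
    form of a heterogeneous model ensemble for regression."""
    return [sum(col) / len(col) for col in zip(*model_predictions)]
```

Averaging helps most when the individual models' errors are weakly correlated, which is consistent with the observation that heterogeneous ensembles matched or beat the best single models.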
In this work nine non-linear regression models were compared for sub-pixel impervious surface area mapping from Landsat images. The comparison was done in three study areas, both for the accuracy of imperviousness coverage evaluation at individual points in time and for the accuracy of imperviousness change assessment. The performance of the individual machine learning algorithms (Cubist, Random Forest, stochastic gradient boosting of regression trees, k-nearest neighbors regression, random k-nearest neighbors regression, Multivariate Adaptive Regression Splines, averaged neural networks, and support vector machines with polynomial and radial kernels) was also compared with the performance of heterogeneous model ensembles constructed from the best models trained using particular techniques. The results proved that in the case of sub-pixel evaluation the most accurate prediction of change may not necessarily be based on the most accurate individual assessments. When single methods are considered, based on the obtained results the Cubist algorithm may be advised for Landsat-based mapping of imperviousness for single dates. However, Random Forest may be endorsed when the most reliable evaluation of imperviousness change is the primary goal. It gave lower accuracies for individual assessments, but better prediction of change due to more correlated errors of individual predictions. For individual time point assessments, heterogeneous model ensembles performed at least as well as the best individual models. In the case of imperviousness change assessment the ensembles always outperformed single-model approaches. This means that it is possible to improve the accuracy of sub-pixel imperviousness change assessment using ensembles of heterogeneous non-linear regression models.
To gather reproducible measurement results, metrologists need a variety of competences. Yet other groups of staff in a manufacturing enterprise also need competences in metrology, in order to assure the appropriate specification of tolerances or sufficient consideration of inspection requirements in production processes. Therefore, a strict focus of metrological qualification on staff preparing or performing the actual measurements is insufficient for the efficient assurance of conformity. On the one hand, a demand-oriented qualification concept is needed to impart the required fundamental knowledge of manufacturing metrology according to the specific needs of each user group. On the other hand, appropriate measures of knowledge management have to be applied in order to assure a proper application of the gathered knowledge and to enhance mutual understanding of the requirements of the other involved user groups. Thus, as a complement to user-specific measures of formal qualification, a concept has been developed to enable knowledge transfer among different groups and departments in an enterprise. Through this holistic approach, the impact of qualification measures can be increased and high product quality can be achieved as a common aim of all related groups of staff.
This paper presents an alternative approach to sequential data classification, based on traditional machine learning algorithms (neural networks, principal component analysis, a multivariate Gaussian anomaly detector) and on finding the shortest path in a directed acyclic graph using the A* algorithm with a regression-based heuristic. Palm gestures were used as an example of sequential data, and a quadrocopter was the controlled object. The study includes the creation of a conceptual model and the practical construction of a system using the GPU to ensure real-time operation. The results present the classification accuracy for the chosen gestures and a comparison of the computation time between the CPU- and GPU-based solutions.
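The graph-search component can be sketched generically as follows: A* over a weighted DAG, where the heuristic `h` estimates the remaining cost to the goal (in the described system, a regression model plays this role; here a trivial zero heuristic stands in, which reduces A* to Dijkstra's algorithm).

```python
import heapq

def a_star(graph, h, start, goal):
    """A* shortest path on a weighted DAG given as {node: [(succ, cost)]}.
    h(node) must be an admissible estimate of the remaining cost."""
    # Priority queue entries: (g + h, g, node, path so far).
    open_set = [(h(start), 0.0, start, [start])]
    best_g = {start: 0.0}
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return g, path
        for succ, cost in graph.get(node, []):
            ng = g + cost
            if ng < best_g.get(succ, float("inf")):
                best_g[succ] = ng
                heapq.heappush(
                    open_set, (ng + h(succ), ng, succ, path + [succ])
                )
    return float("inf"), []
```

A learned regression heuristic, as long as it does not overestimate the true remaining cost, preserves optimality while pruning far more of the graph than the zero heuristic used in this sketch.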
Information Technologies (IT) are an increasingly important factor in the economic and social development of particular countries and of the whole world; therefore we often think and speak of the so-called Information Society (IS) as a new form of socio-economic organization of society. Most properties of IT are beneficial for people and most features of the IS are positive. Nevertheless, we can also find problems arising from the overly fast development of IT, and dangers connected with the increasing dependence of present-day society on IT devices and services. In the paper, selected problems connected with distance teaching and distance learning (so-called e-learning) are pointed out and considered. The so-called “information smog” is identified as the most important problem: it is very troublesome at present and may be the source of serious problems in the future.