A modification of the descriptor in a human detector using Histogram of Oriented Gradients (HOG) and Support Vector Machine (SVM) is presented. The proposed modification inserts the average brightness of each cell, increasing the descriptor length from 3780 to 3908 values; it is easy to compute and immediately yields an ≈25% improvement in the miss rate at 10⁻⁴ False Positives Per Window (FPPW). The modification has been tested on two versions of HOG-based descriptors: the classic Dalal-Triggs descriptor and a modified one in which an additional central cell is used instead of spatial Gaussian masks for blocks. The proposed modification is suitable for hardware implementations of HOG-based detectors, enabling either higher detection accuracy or the omission of hardware-unfriendly operations such as the spatial Gaussian mask. Results on the descriptor's robustness to brightness changes in the test images are also presented. The descriptor may be used in sensor networks equipped with hardware-accelerated image processing to detect humans in images.
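The augmentation described above can be sketched in a few lines. The sketch below assumes the standard Dalal-Triggs 64×128 detection window with 8×8-pixel cells (8×16 = 128 cells), so appending one mean-brightness value per cell extends a 3780-element HOG vector to 3908 elements; the function names and the placeholder HOG vector are illustrative, not taken from the paper.

```python
import numpy as np

def cell_brightness(window, cell=8):
    # window: 128x64 grayscale detection window (Dalal-Triggs size).
    # Reshape into (rows_of_cells, cell, cols_of_cells, cell) and average
    # over the intra-cell axes -> one mean brightness value per 8x8 cell.
    h, w = window.shape
    cells = window.reshape(h // cell, cell, w // cell, cell)
    return cells.mean(axis=(1, 3)).ravel()  # 16 * 8 = 128 values

def augment_descriptor(hog_vec, window):
    # Append the 128 per-cell brightness values: 3780 + 128 = 3908.
    return np.concatenate([hog_vec, cell_brightness(window)])

window = np.random.rand(128, 64)          # synthetic grayscale window
hog_vec = np.zeros(3780)                  # placeholder for a real HOG vector
desc = augment_descriptor(hog_vec, window)
print(desc.shape)  # (3908,)
```

The brightness term requires only per-cell summation, which is why it maps well onto hardware pipelines that already accumulate per-cell gradient histograms.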
Perception takes into account the costs and benefits of possible interpretations of incoming sensory data. This should be especially pertinent for threat recognition, where minimising the cost of missing a real threat is of primary importance. We tested whether the recognition of threats has special characteristics that adapt this process to the task it fulfils. Participants were presented with images of threats and visually matched neutral stimuli, distorted by varying levels of noise. We found a threat superiority effect and a liberal response bias. Moreover, increasing the level of noise degraded the recognition of the neutral images to a greater extent than that of the threatening images. To summarise, recognising threats is special in that it is more resistant to noise and to decline in stimulus quality, suggesting that threat recognition is a fast ‘all or nothing’ process in which threat presence is either confirmed or negated.
Keypoint detection is a basic step in many computer vision algorithms aimed at object recognition, automatic navigation and the analysis of biomedical images. Successful implementation of higher-level image analysis tasks, however, depends on reliable detection of characteristic local image regions termed keypoints. A large number of keypoint detection algorithms have been proposed and verified. In this paper we discuss the most important keypoint detection algorithms. The main part of this work is devoted to the description of a keypoint detection algorithm we propose, which incorporates depth information computed from stereovision cameras or other depth-sensing devices. It is shown that filtering out keypoints that are context dependent, e.g. those located at object boundaries, can improve keypoint matching performance, which is the basis for object recognition tasks. This improvement is demonstrated quantitatively by comparing the proposed algorithm with the widely accepted SIFT keypoint detector. Our study is motivated by the development of a system aimed at aiding the visually impaired in space perception and object identification.
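The boundary-filtering idea can be illustrated with a minimal sketch: a keypoint is discarded when the depth map varies sharply within a small neighbourhood around it, since a large depth range there indicates the keypoint straddles an object boundary. The function name, the window size, and the depth-range threshold below are illustrative assumptions, not the paper's actual parameters, and the keypoints would in practice come from a detector such as SIFT.

```python
import numpy as np

def filter_boundary_keypoints(keypoints, depth, win=3, max_range=0.1):
    # keypoints: iterable of (x, y) pixel coordinates
    # depth:     HxW depth map (e.g. metres) from stereo or a depth sensor
    # A keypoint is kept only if the depth range inside its (2*win+1)^2
    # neighbourhood is small, i.e. it lies on a locally smooth surface.
    kept = []
    h, w = depth.shape
    for x, y in keypoints:
        y0, y1 = max(0, y - win), min(h, y + win + 1)
        x0, x1 = max(0, x - win), min(w, x + win + 1)
        patch = depth[y0:y1, x0:x1]
        if patch.max() - patch.min() <= max_range:
            kept.append((x, y))
    return kept

# Synthetic scene: a depth step (object boundary) at column 50.
depth = np.ones((100, 100))
depth[:, 50:] = 3.0
kps = [(10, 10), (50, 50)]  # one on a flat surface, one on the boundary
print(filter_boundary_keypoints(kps, depth))  # [(10, 10)]
```

Because boundary keypoints see different backgrounds from different viewpoints, removing them tends to leave only keypoints whose local appearance is stable, which is what improves matching performance.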