888 results for Automatic call detector
Abstract:
The original contributions of this thesis to knowledge are novel digital readout architectures for hybrid pixel readout chips. The thesis presents an asynchronous bus-based architecture, a data-node-based column architecture and a network-based pixel matrix architecture for data transport. It is shown that the data-node architecture achieves a readout efficiency of 99% with half the output rate of a bus-based system. The network-based solution avoids "broken" columns caused by certain manufacturing errors, and it distributes internal data traffic more evenly across the pixel matrix than column-based architectures do. An improvement of > 10% in efficiency is achieved with both uniform and non-uniform hit occupancies. Architectural design was carried out using transaction-level modelling (TLM) and sequential high-level design techniques to reduce design and simulation time. Using the high-level techniques, it has been possible to simulate tens of column and full-chip architectures. A more than tenfold decrease in run-time is observed with these techniques compared to the register transfer level (RTL) design approach, and a 50% reduction in lines of code (LoC) is achieved for the high-level models compared to the RTL description. Two architectures are then demonstrated in two hybrid pixel readout chips. The first chip, Timepix3, was designed for the Medipix3 collaboration. According to the measurements, it consumes < 1 W/cm^2 while delivering up to 40 Mhits/s/cm^2 with 10-bit time-over-threshold (ToT) and 18-bit time-of-arrival (ToA) at 1.5625 ns resolution. The chip uses a token-arbitrated, asynchronous two-phase handshake column bus for internal data transfer, and it has been successfully used in a multi-chip particle tracking telescope. The second chip, VeloPix, is a readout chip being designed for the upgrade of the Vertex Locator (VELO) of the LHCb experiment at CERN.
Based on the simulations, it consumes < 1.5 W/cm^2 while delivering up to 320 Mpackets/s/cm^2, with each packet containing up to 8 pixels. VeloPix uses a node-based data fabric to achieve a throughput of 13.3 Mpackets/s from the column to the end-of-column (EoC) logic. By combining Monte Carlo physics data with high-level simulations, it has been demonstrated that the architecture meets the requirements of the VELO (260 Mpackets/s/cm^2 with an efficiency of 99%).
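The buffer-versus-bandwidth trade-off behind the readout-efficiency figures above can be illustrated with a toy high-level model in the spirit of TLM. All numbers below (hit rates, FIFO depth, four pixels sharing one buffer) are illustrative assumptions, not Timepix3 or VeloPix parameters:

```python
import random

def simulate_column(n_cycles, hit_prob, drain_rate, fifo_depth, seed=1):
    """Toy column model: per-pixel Bernoulli hit arrivals feed a shared
    FIFO; the column bus removes up to drain_rate hits per cycle; hits
    arriving at a full FIFO are lost. Returns the readout efficiency."""
    rng = random.Random(seed)
    fifo = 0
    accepted = total = 0
    for _ in range(n_cycles):
        hits = sum(rng.random() < hit_prob for _ in range(4))  # 4 pixels
        total += hits
        for _ in range(hits):
            if fifo < fifo_depth:
                fifo += 1
                accepted += 1          # buffered successfully
        fifo = max(0, fifo - drain_rate)  # bus drains the FIFO
    return accepted / total if total else 1.0
```

With ample drain bandwidth the model reaches near-100% efficiency; starving the bus while raising the occupancy makes losses dominate, mirroring the architectural comparison in the text.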
Abstract:
A long-standing debate in the literature is whether attention can form two or more independent spatial foci in addition to the well-known unique spatial focus. There is evidence that voluntary visual attention divides in space. The possibility that this also occurs for automatic visual attention was investigated here. Thirty-six female volunteers were tested. In each trial, a prime stimulus was presented in the left or right visual hemifield. This stimulus was characterized by the blinking of a superior, middle or inferior ring, the blinking of all these rings, or the blinking of the superior and inferior rings. A target stimulus to which the volunteer should respond with the same side hand or a target stimulus to which she should not respond was presented 100 ms later in a primed location, a location between two primed locations or a location in the contralateral hemifield. Reaction time to the positive target stimulus in a primed location was consistently shorter than reaction time in the horizontally corresponding contralateral location. This attentional effect was significantly smaller or absent when the positive target stimulus appeared in the middle location after the double prime stimulus. These results suggest that automatic visual attention can focus on two separate locations simultaneously, to some extent sparing the region in between.
Abstract:
The present report describes the development of a technique for automatic wheezing recognition in digitally recorded lung sounds. The method is based on the extraction and processing of spectral information from the respiratory cycle and on the use of these data for user feedback and automatic recognition. The respiratory cycle is first pre-processed to normalize its spectral information, and its spectrogram is then computed. The spectrogram image is then processed by a two-dimensional convolution filter and a half-threshold in order to increase the contrast and isolate its highest-amplitude components, respectively. To generate more compact data for automatic recognition, the spectral projection of the processed spectrogram is computed and stored as an array. The highest-magnitude values of the array and their respective spectral values are then located and used as inputs to a multi-layer perceptron artificial neural network, which yields an automatic indication of the presence of wheezes. For validation of the methodology, lung sounds recorded from three different repositories were used. The results show that the proposed technique achieves 84.82% accuracy in the detection of wheezing for an isolated respiratory cycle and 92.86% accuracy when detection is carried out using groups of respiratory cycles obtained from the same person. The system also presents the original recorded sound and the post-processed spectrogram image so that users can draw their own conclusions from the data.
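The processing chain described (spectrogram, 2-D convolution, half-threshold, spectral projection, peak selection) can be sketched as follows. The kernel size, the half-of-maximum threshold and the number of peaks are assumptions for illustration, not the parameters actually used in the study:

```python
import numpy as np
from scipy.signal import spectrogram, convolve2d

def wheeze_features(sound, fs, n_peaks=3):
    """Extract candidate wheeze features from one respiratory cycle."""
    # spectrogram of the (pre-normalized) cycle
    f, t, sxx = spectrogram(sound, fs=fs, nperseg=256)
    # 2-D convolution filter: a small averaging kernel as an example
    kernel = np.ones((3, 3)) / 9.0
    smooth = convolve2d(sxx, kernel, mode="same")
    # half-threshold: suppress everything below half the peak magnitude
    smooth[smooth < 0.5 * smooth.max()] = 0.0
    # spectral projection: collapse the time axis into one array
    proj = smooth.sum(axis=1)
    # strongest projection bins and their frequencies feed the MLP
    idx = np.argsort(proj)[-n_peaks:][::-1]
    return np.concatenate([proj[idx], f[idx]])
```

For a monochromatic test tone the dominant extracted frequency lands on the tone, which is the behaviour the classifier input relies on.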
Abstract:
In vivo proton magnetic resonance spectroscopy (¹H-MRS) is a technique capable of assessing biochemical content and pathways in normal and pathological tissue. In the brain, ¹H-MRS complements the information given by magnetic resonance images. The main goal of the present study was to assess the accuracy of ¹H-MRS for the classification of brain tumors in a pilot study comparing results obtained by manual and semi-automatic quantification of metabolites. In vivo single-voxel ¹H-MRS was performed in 24 control subjects and 26 patients with brain neoplasms that included meningiomas, high-grade neuroglial tumors and pilocytic astrocytomas. Seven metabolite groups (lactate, lipids, N-acetyl-aspartate, glutamate and glutamine group, total creatine, total choline, myo-inositol) were evaluated in all spectra by two methods: a manual one consisting of integration of manually defined peak areas, and the advanced method for accurate, robust and efficient spectral fitting (AMARES), a semi-automatic quantification method implemented in the jMRUI software. Statistical methods included discriminant analysis and the leave-one-out cross-validation method. Both manual and semi-automatic analyses detected differences in metabolite content between tumor groups and controls (P < 0.005). The classification accuracy obtained with the manual method was 75% for high-grade neuroglial tumors, 55% for meningiomas and 56% for pilocytic astrocytomas, while for the semi-automatic method it was 78, 70, and 98%, respectively. Both methods classified all control subjects correctly. The study demonstrated that ¹H-MRS accurately differentiated normal from tumoral brain tissue and confirmed the superiority of the semi-automatic quantification method.
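The discriminant analysis with leave-one-out cross-validation described above can be sketched with scikit-learn as a stand-in for the statistical software actually used; the synthetic two-group "metabolite" data below is purely illustrative:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut

def loo_accuracy(X, y):
    """Classification accuracy under leave-one-out cross-validation:
    each subject is classified by a model trained on all the others."""
    hits = 0
    for train, test in LeaveOneOut().split(X):
        clf = LinearDiscriminantAnalysis().fit(X[train], y[train])
        hits += int(clf.predict(X[test])[0] == y[test][0])
    return hits / len(y)

# illustrative 7-feature vectors (one per metabolite group) for two
# well-separated diagnostic classes
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (20, 7)),
               rng.normal(5.0, 1.0, (20, 7))])
y = np.array([0] * 20 + [1] * 20)
```

Leave-one-out is attractive here because, with only 50 subjects, it uses almost all data for training while still testing every case exactly once.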
Abstract:
Previous assessments of verticality by means of the rod and the rod-and-frame tests indicated that human subjects can be more (field-dependent) or less (field-independent) influenced by a frame placed around a tilted rod. In the present study we propose a new approach to these tests. The judgment of visual verticality (rod test) was evaluated in 50 young subjects (28 males, ranging in age from 20 to 27 years) by randomly projecting a luminous rod tilted between -18° and +18° (negative values indicating leftward tilts) onto a tangent screen. In the rod-and-frame test the rod was displayed within a luminous fixed frame tilted at +18° or -18°. Subjects were instructed to indicate verbally the rod's direction of inclination (forced choice). Visual dependency was estimated by means of a Visual Index calculated from the rod and rod-and-frame test values. Based on this index, volunteers were classified as field-dependent, intermediate or field-independent. A fourth category was created for those field-independent subjects whose number of correct guesses in the rod-and-frame test exceeded that in the rod test, indicating improved performance when a surrounding frame was present. In conclusion, the combined use of the subjective visual vertical and the rod-and-frame test provides a specific and reliable evaluation of verticality in healthy subjects and might be of use to probe changes in brain function after central or peripheral lesions.
Abstract:
Object detection is a fundamental task of computer vision that serves as a core component in a number of industrial and scientific applications, for example in robotics, where objects need to be correctly detected and localized before being grasped and manipulated. Existing object detectors vary in (i) the amount of supervision they need for training, (ii) the type of learning method adopted (generative or discriminative) and (iii) the amount of spatial information used in the object model (model-free, using no spatial information, or model-based, with an explicit spatial model of the object). Although some existing methods report good performance in the detection of certain objects, the results tend to be application-specific, and no universal method has been found that clearly outperforms all others in all areas. This work proposes a novel generative part-based object detector. The generative learning procedure of the developed method allows learning from positive examples only. The detector is based on finding semantically meaningful parts of the object (i.e. a part detector) that can provide information beyond the object location, for example its pose. The object class model, i.e. the appearance of the object parts and their spatial variance (constellation), is modelled explicitly in a fully probabilistic manner. The appearance is based on bio-inspired complex-valued Gabor features that are transformed into part probabilities by an unsupervised Gaussian mixture model (GMM). The proposed novel randomized GMM enables learning from only a few training examples. The probabilistic spatial model of the part configurations is constructed with a mixture of 2D Gaussians. The appearance of the object parts is learned in an object canonical space that removes geometric variations from the part appearance model.
Robustness to pose variations is achieved by object pose quantization, which is more efficient than the previously used scale and orientation shifts in Gabor feature space. The resulting generative object detector is characterized by high recall with low precision, i.e. it produces a large number of false positive detections. A discriminative classifier is therefore used to prune false positive candidate detections produced by the generative detector, improving its precision while keeping recall high. Using only a small number of positive examples, the developed object detector performs comparably to state-of-the-art discriminative methods.
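The step of turning raw appearance features into soft part probabilities with an unsupervised GMM can be sketched as below. Two-dimensional toy descriptors stand in for the complex-valued Gabor features, and scikit-learn's standard GMM stands in for the thesis's randomized variant:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# toy stand-in for Gabor descriptors: samples drawn around two "parts"
rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(0.0, 0.3, (50, 2)),
                   rng.normal(3.0, 0.3, (50, 2))])

# unsupervised fit: no part labels are given, matching the
# positive-examples-only learning setting
gmm = GaussianMixture(n_components=2, random_state=0).fit(feats)

# soft assignment: each descriptor becomes a probability over parts
part_prob = gmm.predict_proba(feats)   # rows sum to 1
```

Soft assignments rather than hard labels are what let the spatial constellation model reason probabilistically about ambiguous detections.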
Supplier-provided automatic warehouse replenishment solutions in the pharmaceutical diagnostics industry
Abstract:
In this work, bromelain was recovered from ground pineapple stem and rind by precipitation with alcohol at low temperature. Bromelain is the name of a group of powerful protein-digesting (proteolytic) enzymes that are particularly useful for reducing muscle and tissue inflammation and as a digestive aid. Temperature control is crucial to avoid irreversible protein denaturation and, consequently, to improve the quality of the recovered enzyme. The process was carried out alternately in two fed-batch pilot tanks: a glass tank and a stainless steel tank. Aliquots containing 100 mL of pineapple aqueous extract were fed into the tank. Inside the jacketed tank, the protein was exposed to unsteady operating conditions during the addition of the precipitating agent (ethanol 99.5%), because both the "aqueous extract to ethanol" dilution ratio and the heat transfer area changed. The coolant flow rate was manipulated through a variable-speed pump. Fine-tuned conventional and adaptive PID controllers were implemented on-line using a fieldbus digital control system. The processing performance was enhanced, and so was the quality (enzyme activity) of the product.
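A minimal discrete-time PID of the kind used to manipulate the coolant flow rate can be sketched as follows. The gains and sampling time are illustrative, not the tuned values from the study:

```python
class PID:
    """Discrete-time PID for a coolant flow loop. The loop is
    reverse-acting: a tank temperature above the setpoint should
    raise the coolant flow command, hence err = measured - setpoint."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, setpoint, measured):
        err = measured - setpoint              # reverse-acting sign
        self.integral += err * self.dt         # rectangular integration
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv
```

An adaptive variant, as used in the study, would additionally retune `kp`, `ki` and `kd` on-line as the dilution ratio and heat transfer area change during ethanol addition.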
Abstract:
The impact of automatic and manual shelling methods during manual/visual sorting of different batches of Brazil nuts from the 2010 and 2011 harvests was evaluated in order to investigate aflatoxin prevention. The samples were tested as follows: in-shell, shell, shelled, and pieces, in order to evaluate the moisture content (mc), water activity (Aw), and total aflatoxin (LOD = 0.3 µg/kg and LOQ = 0.85 µg/kg) at the Brazil nut processing plant. The aflatoxin results obtained for the manually shelled nut samples ranged from 3.0 to 60.3 µg/g, and from 2.0 to 31.0 µg/g for the automatically shelled samples. All samples showed mc levels below the 15% limit; on the other hand, shelled samples from both harvests showed Aw levels above the limit. There were no significant differences between the manual and automatic shelling results during the sorting stages. On the other hand, visual sorting was effective in decreasing aflatoxin contamination in both methods.
Abstract:
The electricity distribution sector will face significant changes in the future. Increasing reliability demands will call for major network investments. At the same time, electricity end-use is undergoing profound changes driven by future energy technologies and other advances in the field. New technologies such as microgeneration and electric vehicles will have different kinds of impacts on electricity distribution network loads. In addition, smart metering provides more accurate electricity consumption data and opportunities to develop sophisticated load modelling and forecasting approaches. Thus, there are both demands and opportunities to develop a new type of long-term forecasting methodology for electricity distribution. The work concentrates on the technical and economic perspectives of electricity distribution. The doctoral dissertation proposes a methodology to forecast electricity consumption in distribution networks. The forecasting process consists of spatial analysis, clustering, end-use modelling, scenarios and simulation methods, and the load forecasts are based on the application of automatic meter reading (AMR) data. The developed long-term forecasting process produces power-based load forecasts. By applying these results, it is possible to forecast the impacts of changes on electrical energy in the network and, further, on the distribution system operator's revenue. These results are applicable to distribution network and business planning. The doctoral dissertation includes a case study, which tests the forecasting process in practice. For the case study, the most prominent future energy technologies are chosen, and their impacts on the electrical energy and power in the network are analysed. The most relevant topics related to changes in the operating environment, namely energy efficiency, microgeneration, electric vehicles, energy storage and demand response, are discussed in more detail.
The study shows that changes in electricity end-use may have radical impacts both on electrical energy and power in the distribution networks and on the distribution revenue. These changes will probably pose challenges for distribution system operators. The study suggests solutions for the distribution system operators on how they can prepare for the changing conditions. It is concluded that a new type of load forecasting methodology is needed, because the previous methods are no longer able to produce adequate forecasts.
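One building block of such a process, clustering AMR customers into load-profile classes for end-use modelling, can be sketched as below. The two synthetic daily profile shapes and all parameters are illustrative assumptions, not data from the dissertation:

```python
import numpy as np
from sklearn.cluster import KMeans

# hypothetical hourly AMR load profiles (customers x 24 h) with two
# underlying usage patterns: an evening peak and a midday peak
rng = np.random.default_rng(1)
hour = np.arange(24)
residential = 1.0 + 0.8 * np.exp(-((hour - 19) ** 2) / 8.0)
office = 1.0 + 0.8 * np.exp(-((hour - 12) ** 2) / 8.0)
profiles = np.vstack([residential + rng.normal(0, 0.05, (40, 24)),
                      office + rng.normal(0, 0.05, (40, 24))])

# cluster customers into load-profile classes; each cluster centroid
# then serves as a representative load curve for scenario simulation
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(profiles)
labels = km.labels_
```

The cluster centroids, scaled by customer counts and scenario assumptions (e.g. electric-vehicle uptake), give the power-based forecasts the text describes.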
Abstract:
Epilepsy is a chronic brain disorder characterized by recurring seizures. An automatic seizure detector, incorporated into a mobile closed-loop system, can improve the quality of life for people with epilepsy. Commercial EEG headbands, such as the Emotiv Epoc, have the potential to be used as data acquisition devices for such a system. In order to estimate that potential, epileptic EEG signals from commercial devices were emulated in this work based on EEG data from a clinical dataset. The emulated characteristics include the referencing scheme, the set of electrodes used, the sampling rate, the sample resolution and the noise level. The performance of an existing algorithm for the detection of epileptic seizures, developed in the context of clinical data, was evaluated on the emulated commercial data. The results show that, after the transformation of the data towards the characteristics of the Emotiv Epoc, the detection capabilities of the algorithm are mostly preserved. The ranges of acceptable changes in the signal parameters are also estimated.
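The emulation of consumer-headset characteristics from clinical EEG can be sketched as follows. The default rate, bit depth and noise level are assumptions used for illustration rather than device specifications, and the decimation is deliberately naive (no anti-alias filter):

```python
import numpy as np

def emulate_headset(eeg, fs_in, fs_out=128, n_bits=14, noise_uv=1.0,
                    seed=0):
    """Degrade a clinical EEG channel (in microvolts) toward
    consumer-headset characteristics: lower sampling rate, coarser
    sample resolution, added sensor noise."""
    rng = np.random.default_rng(seed)
    step = fs_in // fs_out                 # assumes an integer ratio
    x = eeg[::step]                        # naive decimation
    lo, hi = x.min(), x.max()
    levels = 2 ** n_bits - 1
    # requantize to n_bits over the signal's own range
    q = np.round((x - lo) / (hi - lo) * levels) / levels * (hi - lo) + lo
    return q + rng.normal(0.0, noise_uv, q.shape)
```

Sweeping `fs_out`, `n_bits` and `noise_uv` while re-running the detector is one way to estimate the ranges of acceptable signal degradation mentioned above.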
Abstract:
The Beckman Helium Discharge Detector has been found to be sensitive to the fixed gases oxygen, nitrogen, and hydrogen at detection levels 10-100 times more sensitive than possible with a Gow-Mac Thermal Conductivity Detector. Detection levels of approximately 1.9 × 10⁻⁴ % v/v oxygen, 3.1 × 10⁻⁴ % v/v nitrogen, and 3.0 × 10⁻³ % v/v hydrogen are estimated. The response of the Helium Discharge Detector was not linear, but it is usable for quantitation over limited concentration ranges using suitably prepared working standards. Cleanliness of the detector discharge electrodes and purity of the helium carrier and discharge gas were found to be critical to the operation of the detector. Higher sensitivities of the Helium Discharge Detector may be possible through the design and installation of a sensitive, solid-state electrometer.
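Quantitation over a limited range with working standards, as described, can be implemented as a piecewise-linear calibration that refuses to extrapolate beyond the standards. The concentration/response pairs below are invented for illustration, not measured values:

```python
import numpy as np

# hypothetical working-standard calibration: concentration (% v/v)
# versus detector response (arbitrary units); the overall response
# is nonlinear, but each segment is short enough to treat as linear
conc = np.array([1.0e-4, 2.0e-4, 4.0e-4, 8.0e-4, 1.6e-3])
resp = np.array([12.0, 22.5, 40.0, 66.0, 98.0])

def quantify(r):
    """Map a detector response to a concentration; valid only inside
    the calibrated range covered by the working standards."""
    if not resp.min() <= r <= resp.max():
        raise ValueError("response outside calibrated range")
    return float(np.interp(r, resp, conc))
```

Rejecting out-of-range responses reflects the caveat in the text: a nonlinear detector is only trustworthy between bracketing standards.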
Abstract:
Three-dimensional model design is a well-known and studied field with numerous real-world applications. However, the manual construction of these models can often be time-consuming for the average user, despite the advantages offered by computational advances. This thesis presents an approach to the design of 3D structures using evolutionary computation and L-systems, which involves the automated production of such designs using a strict set of fitness functions. These functions focus on the geometric properties of the models produced, as well as on their quantifiable aesthetic value, a topic which has not been widely investigated with respect to 3D models. New extensions to existing aesthetic measures are discussed and implemented in the presented system in order to produce designs that are visually pleasing. The system itself facilitates the construction of models requiring minimal user initialization and no user-based feedback throughout the evolutionary cycle. The models evolved by genetic programming are shown to satisfy multiple criteria, conveying a relationship between their assigned aesthetic value and their perceived aesthetic value. The applicability and effectiveness of a multi-objective approach to the problem are also explored, with a focus on both performance and visual results. Although subjective, these results offer insight into future applications and study in the field of computational aesthetics and automated structure design.
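The L-system rewriting at the core of such a system can be sketched in a few lines. The bracketed rule below is a hypothetical example of the kind a genetic programming run might evolve, not one taken from the thesis:

```python
def lsystem(axiom, rules, n):
    """Expand an L-system string n times by parallel rewriting; the
    result would then be interpreted as turtle-graphics commands
    (draw, turn, branch) to build the 3D geometry that the fitness
    functions score."""
    s = axiom
    for _ in range(n):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# hypothetical evolved rule: F draws a segment, + / - turn, [ ] branch
rules = {"F": "F[+F]F[-F]F"}
```

In the evolutionary loop, the genome encodes the rule set; geometric and aesthetic fitness functions then score the interpreted structure to drive selection.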