Abstract:
Understanding the shape and size of different features of the human body from scanned data is necessary for automated design and evaluation of product ergonomics. In this paper, a computational framework is presented for automatic detection and recognition of important facial feature regions from scanned head-and-shoulder polyhedral models. A noise-tolerant methodology is proposed, using discrete curvature computations, band-pass filtering, and morphological operations, for isolating the primary feature regions of the face, namely the eyes, nose, and mouth. The spatial disposition of the critical points of these isolated feature regions is analyzed to recognize them as the standard landmarks associated with the primary facial features. A number of clinically identified landmarks lie on the facial midline; an efficient algorithm for detecting and processing the midline using a point-sampling technique is also presented. The results, obtained using data from more than 20 subjects, are verified through visualization and physical measurements. Color-based and triangle-skewness-based schemes for isolating geometrically non-prominent features and the ear region are also presented. [DOI: 10.1115/1.3330420]
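As a rough illustration of the three-step pipeline this abstract names (curvature estimation, band-pass filtering, morphological clean-up), the Python sketch below applies the same steps to a synthetic range image. The paper works on polyhedral mesh models with true discrete curvature; the finite-difference Laplacian, the thresholds, and the toy data here are all stand-in assumptions.

```python
import numpy as np
from scipy import ndimage

def feature_regions(depth, low=0.02, high=0.5, min_size=10):
    """Isolate mid-curvature regions (eye/nose/mouth candidates)."""
    # Second-order finite differences stand in for discrete curvature.
    dzdy, dzdx = np.gradient(depth)
    d2y, _ = np.gradient(dzdy)
    _, d2x = np.gradient(dzdx)
    curvature = np.abs(d2x + d2y)          # Laplacian magnitude as a proxy
    # Band-pass step: keep mid-range curvature, rejecting flats and noise spikes.
    mask = (curvature > low) & (curvature < high)
    # Morphological opening removes isolated noisy pixels.
    mask = ndimage.binary_opening(mask, structure=np.ones((3, 3)))
    # Keep only connected components of reasonable size.
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    return np.isin(labels, 1 + np.flatnonzero(sizes >= min_size))

# Toy data: a smooth "head" surface with a sharp bump standing in for the nose.
yy, xx = np.mgrid[0:64, 0:64]
depth = np.exp(-((xx - 32)**2 + (yy - 32)**2) / 400.0)
depth += 0.5 * np.exp(-((xx - 32)**2 + (yy - 40)**2) / 8.0)
print(feature_regions(depth).sum(), "pixels flagged as feature regions")
```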
Abstract:
Objective: To perform spectral analysis of the noise generated by equipment and activities in a level III neonatal intensive care unit (NICU) and to measure real-time sequential hourly noise levels over a 15-day period. Methods: Noise generated in the NICU by individual pieces of equipment and by activities was recorded with a digital spectral sound analyzer for spectral analysis over 0.5–8 kHz. Sequential hourly noise level measurements in all rooms of the NICU were taken for 15 days using a digital sound pressure level meter. The independent-samples t test and one-way ANOVA were used to examine the statistical significance of the results. The study has 90% power to detect differences of at least 4 dB from the recommended maximum of 50 dB with 95% confidence. Results: The mean noise levels in the ventilator room and the stable room were 19.99 dB(A) sound pressure level (SPL) and 11.81 dB(A) SPL higher, respectively, than the recommended maximum of 50 dB(A) (p < 0.001). Equipment generated noise 19.11 dB SPL above the recommended norms in the 1–8 kHz spectrum, and activities generated noise 21.49 dB SPL above the recommended norms in the same spectrum (p < 0.001). The ventilator and nebulisers produced excess noise of 8.5 dB SPL in the 0.5 kHz band. Conclusion: The noise level in the NICU is unacceptably high. Spectral analysis of equipment and activity noise shows that it falls predominantly in the 1–8 kHz spectrum. These levels warrant immediate implementation of noise-reduction protocols as a standard of care in the NICU.
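The comparisons named in the Methods section can be reproduced in miniature as follows; the hourly readings below are simulated for illustration and are not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
ventilator_room = rng.normal(70.0, 3.0, size=15 * 24)  # 15 days of hourly readings
stable_room = rng.normal(61.8, 3.0, size=15 * 24)
RECOMMENDED_MAX = 50.0  # dB(A) SPL

# Independent-samples t test between the two rooms, as in the study.
t, p = stats.ttest_ind(ventilator_room, stable_room)
print(f"room difference: t = {t:.2f}, p = {p:.3g}")

# One-sample test of each room against the recommended maximum.
for name, series in [("ventilator", ventilator_room), ("stable", stable_room)]:
    t1, p1 = stats.ttest_1samp(series, RECOMMENDED_MAX)
    print(f"{name}: mean excess = {series.mean() - RECOMMENDED_MAX:.1f} dB, p = {p1:.3g}")
```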
Abstract:
Our attention is focused on designing an optimal procurement mechanism that a buyer can use to procure multiple units of a homogeneous item based on bids submitted by autonomous, rational, and intelligent suppliers. We design elegant optimal procurement mechanisms for two different situations. In the first, each supplier specifies the maximum quantity that can be supplied together with a per-unit price; for this situation we design an optimal mechanism, S-OPT (Optimal with Simple bids). In the more general case, each supplier specifies discounts based on the volume of supply; here we design an optimal mechanism, VD-OPT (Optimal with Volume Discount bids), which uses the S-OPT mechanism as a building block. The proposed mechanisms minimize the cost to the buyer while satisfying (a) Bayesian incentive compatibility and (b) interim individual rationality.
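The S-OPT mechanism involves Bayesian incentive compatibility and payment rules that the abstract only names; the sketch below shows just the cost-minimizing allocation step for "simple bids" (a per-unit price plus a maximum quantity), under the naive assumption that reported prices are taken at face value.

```python
from typing import List, Tuple

def allocate(bids: List[Tuple[float, int]], demand: int) -> List[int]:
    """bids: (unit_price, max_quantity) per supplier; returns units bought."""
    order = sorted(range(len(bids)), key=lambda i: bids[i][0])  # cheapest first
    bought = [0] * len(bids)
    remaining = demand
    for i in order:
        take = min(bids[i][1], remaining)
        bought[i] = take
        remaining -= take
        if remaining == 0:
            break
    if remaining > 0:
        raise ValueError("suppliers cannot cover the demand")
    return bought

bids = [(4.0, 30), (3.5, 20), (5.0, 50)]   # hypothetical supplier bids
print(allocate(bids, demand=60))           # -> [30, 20, 10]
```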
Abstract:
The study examines the term "low threshold" from the point of view of the most marginalized drug users. Because illicit drug use is criminalised and morally judged in Finland, users face particular barriers to seeking care. Low-threshold services aim to reach drug users who do not seek help themselves; "low threshold" is a metaphor for easy access to services. The theoretical frame of reference of the study consists of treating the term analytically and critically. The research sets out to test the rhetoric of low threshold through a qualitative multi-case study, asking whether the threshold of so-called low-threshold services always appears low to the most marginalized drug users. The cases are: a mobile unit offering health counselling, a day service centre for marginalized substance abusers, the low-threshold project of an outpatient clinic for drug users in Helsinki, and a health counselling service trial in Vyborg, Russia. The case study answers the following questions: 1) How does the low-threshold approach work in the studied cases from the point of view of the most marginalized drug users? 2) What potential thresholds appear, and how did they develop? 3) How do the most marginalized drug users get into the care system through low-threshold services? The data consist of interviews with drug users, workers, and other specialists conducted in 2001–2006, patient documents, and customer registers. The dissertation comprises four articles published in 2006–2008 and a summary article. The study shows that even a low threshold is not always low enough for the most marginalized drug users. These form a highly multi-problem and underprivileged group whose lives and use of services are framed by deep marginalisation, homelessness, multi-substance use, mental and somatic illnesses, and repeated imprisonment. Their use of services is hindered by many factors arising from the care system, from the drug users themselves, and from the operating environment. In Finland, thresholds generally stem from the practical execution of services and from procedures that do not take into account the fear of control and of being labelled a drug user. The marginalized drug users meet the greatest difficulties when striving, by means of low-threshold services, toward further rehabilitative substance abuse care. These difficulties are due to inflexible structures, procedures, and division of labour in the established care system, and also to drug users' poor chances of acting in the way the care system expects. Through high expectations of care motivation and the care system's specialisation, multi-problem, multi-substance users become the "wrong" kind of customers. In Russia, the thresholds are primarily caused by the rigid control politics that society directs at drug users and by the scantiness of the care system: the ideology of reducing drug-related harm is not accepted, and the care system is unwilling to commit to it. "Low threshold" turns out to be a relative term. The rhetoric of the care system is not enough to define the lowness of the threshold unilaterally; the experiences of drug users and their actual care-seeking activity determine it. Nor does the threshold appear the same to everybody: access of one customer group to a service unit may even raise the threshold for another group. The realization of a low-threshold service is also unpredictable: one could not always tell in advance what kind of customers, and how many of them, would be reached.
Keywords: low threshold, marginalized drug users, harm reduction, barriers to services, outreach
Abstract:
This study takes as its premise the prominent social and cultural role that the couple relationship has acquired in modern society. Marriage as a social institution and romantic love as a cultural script have not lost their significance, but during the last few decades the concept of the relationship has taken prominence in our understanding of the love relationship. This change has taken place in a society governed by the therapeutic ethos. This study uses material ranging from in-depth interviews to various mass-media texts to investigate the therapeutic logic that determines our understanding of the couple relationship. The central concept in this study is the therapeutic relationship, which does not refer to any particular type of relationship; in contemporary usage, the relationship is, by definition, therapeutic. The therapeutic relationship is seen as an endless source of conflict and a highly complex dynamic unit in constant need of attention and treatment. Notwithstanding this emphasis on therapy and relationship work, the therapeutic relationship lacks any morally or socially defined direction. Here lies the cultural power of the therapeutic ethos and, according to critics, its dubious aspect. For therapeutic logic, any reason for divorce is possible and plausible; prosaically speaking, the question is not whether to divorce but when. In the end, divorce only attests to the complexity of the relationship. The therapeutic understanding of the relationship gives the illusion that relationships, with their tensions and conflicting emotions, can be fully transferred to the sphere of transparency and therapeutic processing. This illusion, created by relationship talk that emphasizes individual control, is called the omnipotence of the individual. However, the study shows that individual omnipotence is inevitably limited, and hence cracks appear in it. These cracks show that while the therapeutic relationship, based on the ideal of communication, gives the individual a mode of speaking that stresses autonomy, equality, and emotional gratification, it offers little help in expressing our fundamental dependence on other people. The study shows how strong an attraction the therapeutic ethos holds, with its grasp on the complexities of the relationship, in a society where divorce is common and the risk of divorce is collectively experienced.
Abstract:
This study considers the scheduling problem observed in the burn-in operation of semiconductor final testing, where jobs are associated with release times, due dates, processing times, sizes, and non-agreeable release times and due dates. The burn-in oven is modeled as a batch-processing machine that can process a batch of several jobs as long as the total size of the jobs does not exceed the machine capacity; the processing time of a batch equals the longest processing time among the jobs in it. Given the importance of on-time delivery in semiconductor manufacturing, the objective is to minimize total weighted tardiness. We formulate the scheduling problem as an integer linear programming model and empirically show its computational intractability. We therefore propose a few simple greedy heuristic algorithms and a meta-heuristic, simulated annealing (SA). A series of computational experiments evaluates the proposed heuristics against exact solutions on various small problem instances and against estimated optimal solutions on various large real-life instances. The results show that the SA algorithm, initialized with a solution obtained by our proposed greedy heuristic, consistently finds a robust solution in a reasonable amount of computation time.
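A compact sketch of the approach described here: jobs are kept in a sequence, packed into capacity-feasible batches in order, and the sequence is improved by simulated annealing on total weighted tardiness. The job data, parameter values, and the swap neighbourhood below are illustrative choices, not the paper's exact settings.

```python
import math, random

# job = (release, due, proc_time, size, weight); values are made up.
jobs = [(0, 10, 4, 3, 1.0), (2, 12, 6, 4, 2.0), (1, 8, 3, 2, 1.5),
        (5, 20, 5, 5, 1.0), (0, 15, 7, 3, 2.5)]
CAPACITY = 8

def weighted_tardiness(seq):
    t, total, batch = 0.0, 0.0, []
    def close_batch():
        nonlocal t, total
        if not batch:
            return
        start = max(t, max(jobs[j][0] for j in batch))   # wait for releases
        t = start + max(jobs[j][2] for j in batch)       # longest job sets batch time
        for j in batch:
            total += jobs[j][4] * max(0.0, t - jobs[j][1])
        batch.clear()
    load = 0
    for j in seq:
        if load + jobs[j][3] > CAPACITY:                 # batch is full: run it
            close_batch()
            load = 0
        batch.append(j)
        load += jobs[j][3]
    close_batch()
    return total

def anneal(iters=5000, temp=10.0, cooling=0.999):
    seq = list(range(len(jobs)))
    best, best_cost = seq[:], weighted_tardiness(seq)
    cost = best_cost
    for _ in range(iters):
        a, b = random.sample(range(len(seq)), 2)
        seq[a], seq[b] = seq[b], seq[a]                  # swap move
        new_cost = weighted_tardiness(seq)
        if new_cost <= cost or random.random() < math.exp((cost - new_cost) / temp):
            cost = new_cost
            if cost < best_cost:
                best, best_cost = seq[:], cost
        else:
            seq[a], seq[b] = seq[b], seq[a]              # reject: undo the swap
        temp *= cooling
    return best, best_cost

print(anneal())
```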
Abstract:
This report derives from the EU-funded research project “Key Factors Influencing Economic Relationships and Communication in European Food Chains” (FOODCOMM). The research consortium consisted of the following organisations: University of Bonn (UNI BONN), Department of Agricultural and Food Marketing Research (overall project co-ordination); Institute of Agricultural Development in Central and Eastern Europe (IAMO), Department for Agricultural Markets, Marketing and World Agricultural Trade, Halle (Saale), Germany; University of Helsinki, Ruralia Institute Seinäjoki Unit, Finland; Scottish Agricultural College (SAC), Food Marketing Research Team - Land Economy Research Group, Edinburgh and Aberdeen; Ashtown Food Research Centre (AFRC), Teagasc, Food Marketing Unit, Dublin; Institute of Agricultural & Food Economics (IAFE), Department of Market Analysis and Food Processing, Warsaw; and Government of Aragon, Center for Agro-Food Research and Technology (CITA), Zaragoza, Spain. The aim of the FOODCOMM project was to examine the role (prevalence, necessity and significance) of economic relationships in selected European food chains and to identify the economic, social and cultural factors which influence co-ordination within these chains. The project considered meat and cereal commodities in six European countries (Finland, Germany, Ireland, Poland, Spain, UK/Scotland) and was commissioned against a background of changing European food markets. The research project as a whole consisted of seven work packages. This report presents the results of qualitative research conducted for work package 5 (WP5) in the pig meat and rye bread chains in Finland. The Ruralia Institute would like to give special thanks to all the individuals and companies that kindly gave up their time to take part in the study; their input has been invaluable to the project. Research assistant Sanna-Helena Rantala made a significant contribution to the data gathering. The FOODCOMM project was coordinated by the University of Bonn, Department of Agricultural and Food Market Research; special thanks to Professor Monika Hartmann for acting as project leader.
Abstract:
One of the problems encountered when photographic emulsions are used as the recording medium in holography is the appreciable time delay before the reconstruction can be viewed. This is largely due to the number of steps involved in processing and can be annoying in many applications.
Abstract:
Listening to music involves a widely distributed bilateral network of brain regions that controls many auditory perceptual, cognitive, emotional, and motor functions. Exposure to music can also temporarily improve mood, reduce stress, and enhance cognitive performance as well as promote neural plasticity. However, very little is currently known about the relationship between music perception and auditory and cognitive processes or about the potential therapeutic effects of listening to music after neural damage. This thesis explores the interplay of auditory, cognitive, and emotional factors related to music processing after a middle cerebral artery (MCA) stroke. In the acute recovery phase, 60 MCA stroke patients were randomly assigned to a music listening group, an audio book listening group, or a control group. All patients underwent neuropsychological assessments, magnetoencephalography (MEG) measurements, and magnetic resonance imaging (MRI) scans repeatedly during a six-month post-stroke period. The results revealed that amusia, a deficit of music perception, is a common and persistent deficit after a stroke, especially if the stroke affects the frontal and temporal brain areas in the right hemisphere. Amusia is clearly associated with deficits in both auditory encoding, as indicated by the magnetic mismatch negativity (MMNm) response, and domain-general cognitive processes, such as attention, working memory, and executive functions. Furthermore, both music and audio book listening increased the MMNm, whereas only music listening improved the recovery of verbal memory and focused attention as well as prevented a depressed and confused mood during the first post-stroke months. These findings indicate a close link between musical, auditory, and cognitive processes in the brain. Importantly, they also encourage the use of listening to music as a rehabilitative leisure activity after a stroke and suggest that the auditory environment can induce long-term plastic changes in the recovering brain.
Abstract:
This paper addresses the problem of resolving ambiguities in frequently confused online Tamil character pairs by employing script-specific algorithms as a post-classification step. Robust structural cues and temporal information from the preprocessed character are extensively utilized in the design of these algorithms. The methods are quite robust in automatically extracting the discriminative sub-strokes of confused characters for further analysis. Experimental validation on the IWFHR database indicates error rates of less than 3% for the confused characters. These post-processing steps therefore have good potential to improve the performance of online Tamil handwritten character recognition.
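The paper's script-specific rules rely on Tamil sub-stroke geometry that the abstract does not detail; the sketch below shows only the general post-classification dispatch pattern: when the recognizer's top choice falls in a known confusion pair, a dedicated discriminator re-examines the stroke data. The pair names and the discriminator's cue are hypothetical.

```python
from typing import Callable, Dict, List, Tuple

Stroke = List[Tuple[float, float]]   # (x, y) points in writing order

def end_point_height(stroke: Stroke) -> float:
    return stroke[-1][1]             # a toy structural cue: final pen height

def disambiguate_pair_a_b(stroke: Stroke) -> str:
    # Hypothetical rule: the pair members differ in where the last stroke ends.
    return "char_a" if end_point_height(stroke) < 0.5 else "char_b"

DISCRIMINATORS: Dict[frozenset, Callable[[Stroke], str]] = {
    frozenset({"char_a", "char_b"}): disambiguate_pair_a_b,
}

def post_process(top: str, runner_up: str, stroke: Stroke) -> str:
    rule = DISCRIMINATORS.get(frozenset({top, runner_up}))
    return rule(stroke) if rule else top

stroke = [(0.1, 0.9), (0.5, 0.6), (0.8, 0.3)]
print(post_process("char_b", "char_a", stroke))   # -> char_a
```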
Abstract:
One of the biggest challenges in considering polymer nanocomposites for electrical insulation applications lies in determining their electrical properties accurately, which in turn depend on several factors, the primary one being the dispersion of particles in the polymer matrix. Against this background, this paper reports an experimental study of the effects of different processing techniques on the dispersion of filler particles in the polymer matrix and the related effects on the dielectric properties of the composites. Polymer composite and nanocomposite samples were prepared by mixing 10% by weight of commercially available TiO2 particles of two different sizes into epoxy using different processing methods. A considerable effect of the processing method could be seen in the dielectric properties of the nanocomposites.
Abstract:
The growth of high-performance applications in computer graphics, signal processing, and scientific computing is a key driver for high-performance, fixed-latency, pipelined floating-point dividers. Solutions available in the literature use large lookup tables for double-precision floating-point operations. In this paper, we propose a cost-effective, fixed-latency pipelined divider using a modified Taylor-series expansion for double-precision floating-point operations. We reduce chip area by using a smaller lookup table. We show that the latency of the proposed divider is 49.4 times the latency of a full adder, and that the proposed divider reduces chip area by about 81% compared with the pipelined divider in [9], which is also based on a modified Taylor series.
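The paper's divider is a hardware pipeline; as a purely numeric sketch of the underlying lookup-table-plus-Taylor-series idea, the snippet below seeds a reciprocal from a small table over the mantissa range [1, 2) and refines it with a truncated series. The table size and expansion order are illustrative, not the paper's design.

```python
import numpy as np

TABLE_BITS = 6
# Reciprocals of the midpoints of 2**TABLE_BITS sub-intervals of [1, 2).
midpoints = 1.0 + (np.arange(2**TABLE_BITS) + 0.5) / 2**TABLE_BITS
recip_table = 1.0 / midpoints

def reciprocal(b, terms=3):
    """Approximate 1/b for b in [1, 2) using a table seed + Taylor series."""
    idx = int((b - 1.0) * 2**TABLE_BITS)   # which sub-interval b falls in
    b0, r0 = midpoints[idx], recip_table[idx]
    e = (b - b0) * r0                      # b = b0 * (1 + e), |e| small
    # 1/b = r0 / (1 + e) = r0 * (1 - e + e^2 - ...), truncated
    series = sum((-e)**k for k in range(terms))
    return r0 * series

b = 1.734
approx = reciprocal(b)
print(f"1/{b} ~ {approx:.9f} (error {abs(approx - 1/b):.2e})")
```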
Abstract:
In this paper, we present a growing and pruning radial basis function (GAP-RBF) based no-reference (NR) image quality model for JPEG-coded images. The quality of the images is estimated without reference to the original images. The features for predicting perceived image quality are extracted by considering key human visual sensitivity (HVS) factors such as edge amplitude, edge length, background activity, and background luminance. Image quality estimation involves computing the functional relationship between HVS features and subjective test scores. Here, the quality estimation problem is transformed into a function approximation problem and solved using a GAP-RBF network, which uses a sequential learning algorithm to approximate the functional relationship. The computational complexity and memory requirements of the GAP-RBF algorithm are lower than those of batch learning algorithms. The GAP-RBF algorithm also finds a compact image quality model and does not require retraining when new image samples are presented. Experimental results show that the GAP-RBF image quality model emulates the mean opinion score (MOS). The subjective test results of the proposed metric are compared with a JPEG no-reference image quality index as well as a full-reference structural similarity image quality index, and the proposed metric is observed to outperform both.
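GAP-RBF's sequential growing and pruning rules are beyond a short sketch; the snippet below shows only the underlying idea of an RBF network mapping HVS-style feature vectors to quality scores, fitted here in batch with least squares on synthetic data rather than with the paper's sequential learner.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(200, 4))                 # stand-ins for edge/background features
y = 5.0 - 3.0 * X[:, 0] + rng.normal(0, 0.1, 200)    # synthetic "MOS" targets

centers = X[rng.choice(len(X), 20, replace=False)]   # fixed RBF centres
width = 0.5

def design(X):
    # Gaussian RBF activations for every (sample, centre) pair.
    d2 = ((X[:, None, :] - centers[None, :, :])**2).sum(-1)
    return np.exp(-d2 / (2 * width**2))

w, *_ = np.linalg.lstsq(design(X), y, rcond=None)    # batch least-squares weights
X_test = rng.uniform(0, 1, size=(5, 4))
print("predicted scores:", design(X_test) @ w)
```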
Abstract:
We propose two texture-based approaches, one involving Gabor filters and the other employing log-polar wavelets, for separating text from non-text elements in a document image. Both proposed algorithms compute local energy at information-rich points, which are marked by the Harris corner detector. The advantage of this approach is that local energy is calculated only at selected points rather than throughout the image, saving considerable computation time. The algorithms have been tested on a large set of scanned text pages, and the results are better than those of existing algorithms. Of the two proposed schemes, the Gabor-filter-based scheme marginally outperforms the wavelet-based scheme.
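A minimal sketch of the Gabor-filter variant of this idea: compute a Harris corner response to pick information-rich points, then evaluate Gabor-filter energy only in small patches around those points instead of over the whole image. The kernel sizes, thresholds, and toy image are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def harris(img, k=0.04, sigma=1.0):
    # Classic Harris response from smoothed products of image gradients.
    dy, dx = np.gradient(img.astype(float))
    Ixx = ndimage.gaussian_filter(dx * dx, sigma)
    Iyy = ndimage.gaussian_filter(dy * dy, sigma)
    Ixy = ndimage.gaussian_filter(dx * dy, sigma)
    return Ixx * Iyy - Ixy**2 - k * (Ixx + Iyy)**2

def gabor_kernel(theta, lam=8.0, sigma=3.0, size=15):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def local_energy(img, points, kernels, half=7):
    energies = []
    for r, c in points:
        patch = img[r - half:r + half + 1, c - half:c + half + 1]
        if patch.shape != (2 * half + 1, 2 * half + 1):
            continue  # skip points too close to the border
        energies.append(sum((patch * k).sum()**2 for k in kernels))
    return energies

img = np.zeros((64, 64))
img[20:24, 10:50] = 1.0                        # a crude "text stroke"
resp = harris(img)
top = np.dstack(np.unravel_index(np.argsort(resp.ravel())[-20:], resp.shape))[0]
kernels = [gabor_kernel(t) for t in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)]
print(local_energy(img, top, kernels)[:5])
```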
Abstract:
The move towards IT outsourcing is the first step towards an environment where compute infrastructure is treated as a service. In utility computing, this IT service has to honor Service Level Agreements (SLAs) in order to meet the desired Quality of Service (QoS) guarantees. Such an environment requires reliable services in order to maximize the utilization of resources and to decrease the Total Cost of Ownership (TCO). This reliability cannot come at the cost of resource duplication, since duplication increases the TCO of the data center and hence the cost per compute unit. In this paper, we look into ways of projecting the impact of hardware failures on SLAs and the techniques required to take proactive recovery steps when a failure is predicted. By maintaining health vectors for all hardware and system resources, we predict the failure probability of resources at runtime, based on observed hardware error and failure events. This in turn allows an availability-aware middleware to take proactive action, even before the application is affected, when the system and the application have low recoverability. The proposed framework has been prototyped on a system running HP-UX. Our offline analysis of the prediction system on hardware error logs indicates no more than 10% false positives. To the best of our knowledge, this work is the first of its kind to perform an end-to-end analysis of the impact of a hardware fault on application SLAs in a live system.
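The paper's prediction model and middleware hooks are not specified in the abstract; this sketch only illustrates the general pattern it describes: keep a per-resource health vector of observed error events, derive a failure probability from it, and trigger a proactive action above a threshold. The error types, weights, and logistic squash are assumptions.

```python
import math
from collections import defaultdict

ERROR_WEIGHTS = {"ecc_corrected": 0.1, "ecc_uncorrected": 1.0, "disk_retry": 0.3}
health = defaultdict(lambda: defaultdict(int))   # resource -> error counts

def record(resource: str, error: str) -> None:
    health[resource][error] += 1

def failure_probability(resource: str) -> float:
    # Squash a weighted error score into (0, 1); unknown errors get weight 0.5.
    score = sum(ERROR_WEIGHTS.get(e, 0.5) * n for e, n in health[resource].items())
    return 1.0 - math.exp(-score)

def maybe_migrate(resource: str, threshold: float = 0.5) -> None:
    p = failure_probability(resource)
    if p >= threshold:
        print(f"{resource}: p(failure) = {p:.2f} -> proactively migrating workload")

for _ in range(3):
    record("node7/dimm2", "ecc_corrected")
record("node7/dimm2", "ecc_uncorrected")
maybe_migrate("node7/dimm2")
```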