917 results for post-processing method
Abstract:
The advent of smart TVs has reshaped the TV-consumer interaction by combining TVs with mobile-like applications and access to the Internet. However, consumers are still unable to seamlessly interact with the content being streamed. An example of such a limitation is TV shopping, in which a consumer purchases a product or item displayed in the current TV show. Currently, consumers can only stop the current show and attempt to find a similar item on the Web or in an actual store. It would be more convenient if the consumer could interact with the TV to purchase interesting items. Towards the realization of TV shopping, this dissertation proposes a scalable multimedia content processing framework. Two main challenges in TV shopping are addressed: the efficient detection of products in the content stream, and the retrieval of similar products given a consumer-selected product. The proposed framework consists of three components. The first component performs computation- and time-aware multimedia abstraction to select a reduced number of frames that summarize the important information in the video stream. By both reducing the number of frames and taking into account the computational cost of the subsequent detection phase, this component allows the efficient detection of products in the stream. The second component realizes the detection phase. It executes scalable product detection using multi-cue optimization. Additional information cues are formulated into an optimization problem that allows the detection of complex products, i.e., those that do not have a rigid form and can appear in various poses. After the second component identifies products in the video stream, the consumer can select an interesting one for which similar ones must be located in a product database. To this end, the third component of the framework is an efficient, multi-dimensional, tree-based indexing method for multimedia databases. The proposed index mechanism serves as the backbone of the search. Moreover, it efficiently bridges the semantic gap and perception-subjectivity issues during the retrieval process to provide more relevant results.
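The dissertation's own index structure is not detailed in this abstract; as a minimal, hypothetical sketch of the retrieval idea behind the third component, the snippet below indexes product feature vectors with a k-d tree (SciPy's cKDTree, an assumption rather than the author's method) and returns the nearest products to a consumer-selected one.

```python
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical product database: one feature vector per product
# (e.g., color/shape descriptors). Values are random stand-ins.
rng = np.random.default_rng(0)
features = rng.random((10_000, 64))   # 10k products, 64-D descriptors
index = cKDTree(features)             # tree-based, multi-dimensional index

def retrieve_similar(query_vec, k=5):
    """Return indices of the k products closest to the query descriptor."""
    distances, ids = index.query(query_vec, k=k)
    return ids, distances

# The consumer selects a detected product; its descriptor is the query.
ids, dists = retrieve_similar(features[42], k=5)
print(ids, dists)
```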
Abstract:
There is an increasing demand for DNA analysis because of the sensitivity of the method and the ability to uniquely identify and distinguish individuals with a high degree of certainty. This demand has led to huge backlogs in evidence lockers, since current DNA extraction protocols require long processing times. The DNA analysis procedure becomes more complicated when analyzing sexual assault casework samples, where the evidence contains more than one contributor. Additional processing to separate different cell types, needed to simplify the final data interpretation, further adds to the existing cumbersome protocols. The goal of the present project is to develop a rapid and efficient extraction method that permits selective digestion of mixtures. Selective recovery of male DNA was achieved with as little as 15 minutes of lysis time upon exposure to high pressure under alkaline conditions. Pressure cycling technology (PCT) is carried out in a barocycler that has a small footprint and is semi-automated. Whereas typically less than 10% male DNA is recovered using the standard extraction protocol for rape kits, almost seven times more male DNA was recovered from swabs using this novel method. Various parameters, including instrument settings and buffer composition, were optimized to achieve selective recovery of sperm DNA. Developmental validation studies were also performed to determine the efficiency of this method in processing samples exposed to various conditions that can affect the quality of the extraction and the final DNA profile. An easy-to-use interface, minimal manual interference and the ability to achieve high yields with simple reagents in a relatively short time make this an ideal method for potential application in analyzing sexual assault samples.
Abstract:
How children rate vegetables may be influenced by the preparation method. The primary objective of this study was for first grade students to take part in a cooking demonstration and to taste and rate vegetables raw and cooked. First grade children from two classes (N = 52: 18 boys and 34 girls, approximately half Hispanic) who had assented and had signed parental consent participated in the study. The degree of liking of five commonly eaten vegetables was recorded by the students using a hedonic scale, first raw (pre-demonstration) and then cooked (post-demonstration). A food habit questionnaire was filled out by parents to evaluate their mealtime practices and beliefs about their child's eating habits. Paired sample t-tests revealed significant differences in preferences for vegetables in their raw and cooked states. Several mealtime characteristics were significantly associated with children's vegetable preferences. Parents who reported being satisfied with how often the family eats evening meals together were more likely to report that their child eats adequate vegetables for their health (p = 0.026). Parents who stated that they were satisfied with their child's eating habits were more likely to report that their child was trying new foods (p < .001). Cooking demonstrations by nutrition professionals may be an important strategy that parents and teachers can use to promote vegetable intake. It is important that nutrition professionals provide guidance to parents on encouraging vegetable consumption, so that parents can model healthy food consumption for their children.
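As an illustration of the paired-sample analysis described above (a generic sketch with made-up ratings, not the study's data), the snippet below compares each child's hedonic rating of a vegetable raw versus cooked using scipy.stats.ttest_rel.

```python
import numpy as np
from scipy import stats

# Hypothetical 5-point hedonic ratings for one vegetable,
# one pair of scores per child (raw vs. cooked).
raw    = np.array([3, 2, 4, 1, 3, 2, 5, 2, 3, 4])
cooked = np.array([4, 3, 4, 2, 4, 3, 5, 3, 4, 4])

# Paired-sample t-test: do mean ratings differ between states?
t_stat, p_value = stats.ttest_rel(raw, cooked)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```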
Abstract:
The purpose of this study was to determine which of two methods is more appropriate for teaching pitch discrimination to Grade 6 choral students in order to improve sight-singing note accuracy. The study consisted of three phases: pre-testing, instruction and post-testing. During the four-week study, the experimental group received training using the Kodaly method while the control group received training using the traditional method. The pre- and post-tests were evaluated by three trained musicians. The analysis of the data utilized an independent t-test and a paired t-test, with the method of teaching (experimental versus control) as a factor. Quantitative results suggest that the experimental subjects, those receiving Kodaly instruction, showed significantly greater improvement in pitch accuracy at post-treatment than the control group. These results indicate that the Kodaly method was more effective in producing accurate pitch in sight-singing.
Abstract:
The presence of inhibitory substances in biological forensic samples has affected, and continues to affect, the quality of the data generated by DNA typing processes. Although the chemistries used during these procedures have been enhanced to mitigate the effects of such deleterious compounds, some challenges remain. Inhibitors can be components of the samples, of the substrate where samples were deposited, or of chemicals associated with the DNA purification step. Therefore, a thorough understanding of the extraction processes and their ability to handle the various types of inhibitory substances can help define the best analytical processing for any given sample. A series of experiments was conducted to establish the inhibition tolerance of quantification and amplification kits using common inhibitory substances, in order to determine whether current laboratory practices are optimal for identifying potential problems associated with inhibition. DART mass spectrometry was used to determine the amount of inhibitor carryover after sample purification, its correlation to the initial inhibitor input in the sample, and the overall effect on the results. Finally, a novel alternative for gathering investigative leads from samples that would otherwise be ineffective for DNA typing, due to large amounts of inhibitory substances and/or environmental degradation, was tested. This included generating data associated with microbial peak signatures to identify locations of clandestine human graves. Results demonstrate that the current methods for assessing inhibition are not necessarily accurate, as samples that appear inhibited in the quantification process can yield full DNA profiles, while those that do not indicate inhibition may suffer from lowered amplification efficiency or PCR artifacts. The extraction methods tested were able to remove more than 90% of the inhibitors from all samples, with the exception of phenol, which was present in variable amounts whenever the organic extraction approach was used. Although the results suggest that most inhibitors have minimal effect on downstream applications, analysts should exercise caution when selecting the best extraction method for particular samples, as casework DNA samples are often present in small quantities and can contain an overwhelming amount of inhibitory substances.
Abstract:
This dissertation develops a new mathematical approach that overcomes the effect of a data processing phenomenon known as "histogram binning", inherent to flow cytometry data. A real-time procedure is introduced to prove the effectiveness and fast implementation of this approach on real-world data. The histogram binning effect is a dilemma posed by two seemingly antagonistic developments: (1) flow cytometry data in histogram form is extended in its dynamic range to improve its analysis and interpretation, and (2) the inevitable dynamic range extension introduces an unwelcome side effect, the binning effect, which skews the statistics of the data, undermining as a consequence the accuracy of the analysis and the eventual interpretation of the data. Researchers in the field have contended with this dilemma for many years, resorting either to hardware approaches that are rather costly and carry inherent calibration and noise effects, or to software techniques that filter the binning effect without successfully preserving the statistical content of the original data. The mathematical approach introduced in this dissertation is appealing enough that a patent application has been filed. The contribution of this dissertation is an incremental scientific innovation based on a mathematical framework that will allow researchers in the field of flow cytometry to improve the interpretation of data, knowing that its statistical meaning has been faithfully preserved for optimized analysis. Furthermore, with the same mathematical foundation, proof of the origin of this inherent artifact is provided. These results are unique in that new mathematical derivations are established to define and solve the critical problem of the binning effect faced at the experimental assessment level, providing a data platform that preserves its statistical content. In addition, a novel method for accumulating the log-transformed data was developed. This new method uses the properties of the transformation of statistical distributions to accumulate the output histogram in a non-integer, multi-channel fashion. Although the mathematics of this new mapping technique is intricate, the concise nature of the derivations allows for an implementation that lends itself to real-time operation using lookup tables, a task that is also introduced in this dissertation.
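The dissertation's exact mapping is not given in the abstract; the sketch below is only a hypothetical illustration of the general idea of non-integer, multi-channel accumulation via a precomputed lookup table: each raw linear channel maps to a fractional position on a log scale, and its counts are split between the two neighboring output bins by the fractional weight.

```python
import numpy as np

N_LINEAR = 1024   # raw linear channels (assumed instrument resolution)
N_LOG = 256       # output log-scale histogram bins (assumed)

# Precomputed lookup table: fractional log-bin position per linear channel.
channels = np.arange(1, N_LINEAR + 1)
pos = (np.log(channels) / np.log(N_LINEAR)) * (N_LOG - 1)
lo_bin = np.floor(pos).astype(int)   # lower neighboring output bin
hi_frac = pos - lo_bin               # weight assigned to the upper bin

def accumulate_log(counts):
    """Accumulate a linear-channel histogram into log-scale bins,
    splitting each channel's counts between two adjacent bins."""
    out = np.zeros(N_LOG)
    np.add.at(out, lo_bin, counts * (1.0 - hi_frac))
    np.add.at(out, np.minimum(lo_bin + 1, N_LOG - 1), counts * hi_frac)
    return out

# Example: a synthetic linear histogram.
counts = np.random.default_rng(1).poisson(5.0, N_LINEAR)
log_hist = accumulate_log(counts)
```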
Abstract:
Accurately assessing the extent of myocardial tissue injury induced by myocardial infarction (MI) is critical to the planning and optimization of MI patient management. With this in mind, this study investigated the feasibility of using combined fluorescence and diffuse reflectance spectroscopy to characterize a myocardial infarct at different stages of its development. An animal study was conducted using twenty male Sprague-Dawley rats with MI. In vivo fluorescence spectra at 337 nm excitation and diffuse reflectance between 400 nm and 900 nm were measured from the heart using a portable fiber-optic spectroscopic system. Spectral acquisition was performed on (1) the normal heart region, (2) the region immediately surrounding the infarct, and (3) the infarcted region at one, two, three and four weeks into MI development. The spectral data were divided into six subgroups according to the histopathological features associated with various severities of myocardial tissue injury and various stages of post-infarction myocardial tissue remodeling. Various data processing and analysis techniques were employed to recognize the representative spectral features corresponding to the histopathological features associated with myocardial infarction. The identified spectral features were used in discriminant analysis to further evaluate their effectiveness in classifying tissue injuries induced by MI. In this study, it was observed that MI induced significant alterations (p < 0.05) in the diffuse reflectance spectra, especially between 450 nm and 600 nm, from myocardial tissue within the infarcted and surrounding regions. In addition, MI induced a significant elevation in fluorescence intensities at 400 nm and 460 nm from myocardial tissue in the same regions. The extent of these spectral alterations was related to the duration of the infarction. Using the identified spectral features, an effective tissue injury classification algorithm was developed that produced a satisfactory overall classification result (87.8%). The findings of this research support the concept that optical spectroscopy is a useful tool for non-invasively determining the in vivo pathophysiological features of a myocardial infarct and its surrounding tissue, thereby providing valuable real-time feedback to surgeons during various surgical interventions for MI.
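The abstract names discriminant analysis as the classification step; the snippet below is a generic sketch (synthetic spectra and scikit-learn's LinearDiscriminantAnalysis, both assumptions) of how selected spectral features could be used to classify tissue states.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Synthetic stand-in data: rows are measurements, columns are spectral
# features (e.g., reflectance at selected wavelengths, fluorescence peaks).
rng = np.random.default_rng(0)
X = np.vstack([
    rng.normal(0.0, 1.0, (60, 8)),   # normal tissue
    rng.normal(0.7, 1.0, (60, 8)),   # surrounding region
    rng.normal(1.5, 1.0, (60, 8)),   # infarcted region
])
y = np.repeat([0, 1, 2], 60)

# Linear discriminant analysis with 5-fold cross-validation.
clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean accuracy: {scores.mean():.1%}")
```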
Abstract:
Absolute abundances (concentrations) of dinoflagellate cysts are often determined by adding a spike of Lycopodium clavatum marker grains to a sample before palynological processing. An inter-laboratory calibration exercise was set up to test the comparability of results obtained in different laboratories, each using its own preparation method. Each of the 23 laboratories received the same amount of homogenized splits of four Quaternary sediment samples. The samples originate from different localities and consist of a variety of lithologies. Dinoflagellate cysts were extracted and counted, and relative and absolute abundances were calculated. The relative abundances proved to be fairly reproducible, notwithstanding a need for taxonomic calibration. By contrast, excessive loss of Lycopodium spores during sample preparation resulted in non-reproducible absolute abundances. Oxidation, KOH, warm acids, acetolysis, mesh sizes larger than 15 µm and long ultrasonication (> 1 min) must be avoided in order to determine reproducible absolute abundances. The results of this work therefore indicate that the dinoflagellate cyst worker should choose between using the proposed standard method, which circumvents these critical steps; adding the Lycopodium tablets at the end of the preparation; or using an alternative method.
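For context (the standard marker-grain arithmetic, not a formula specific to this exercise), the absolute abundance follows from the ratio of counted cysts to counted marker grains; a minimal sketch with illustrative numbers:

```python
def cysts_per_gram(cysts_counted, lycopodium_counted,
                   lycopodium_added, dry_weight_g):
    """Standard marker-grain estimate of absolute abundance: scale the
    cyst count by the fraction of added Lycopodium spores recovered,
    per gram of dry sediment."""
    return (cysts_counted * lycopodium_added) / (lycopodium_counted * dry_weight_g)

# Example: 250 cysts and 400 Lycopodium grains counted, one tablet of
# roughly 18,584 spores added to 5 g of dry sediment (illustrative only).
print(cysts_per_gram(250, 400, 18_584, 5.0))  # cysts per gram
```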
Abstract:
The combination of Metal Injection Moulding (MIM) and the Space Holder Method (SHM) is a promising technique for manufacturing porous titanium parts with well-defined porosity, such as biomedical implants, since it permits a high degree of automation and reduced production costs at large scale compared with the traditional technique (SHM and green machining). However, the application of this technique is limited by partial closure of the porosity at the surface of the samples, which deteriorates the fixation of the implant to the bone. Furthermore, it has so far not been possible to achieve stable processing conditions when the amount of space holder exceeds 50 vol.%. The literature, however, reports that the optimal porosity range for titanium spinal implants is 60-65 vol.%. Therefore, in the present study, two approaches were pursued to produce highly porous samples by combining MIM and SHM with a constant space holder content of 70 vol.% and open porosity at the surface. In the first approach, the optimal amount of space holder was investigated; to this end, feedstock homogenization and the process parameters were improved in order to make the feedstock injectable. In the second approach, a plasma treatment was applied to the samples before the final sintering step. Both routes improved the dimensional stability of the samples during thermal debinding and sintering, allowing highly porous titanium samples to be sintered without deformation of the structure.
Abstract:
This thesis reports on a novel method for building a 3-D model of the above-water portion of icebergs using surface imaging. The goal is to work towards the automation of iceberg surveys, allowing an Autonomous Surface Craft (ASC) to acquire shape and size information. After data and images are collected, the core software algorithm is made up of three parts: occluding contour finding, volume intersection, and parameter estimation. A software module is designed that could be used on the ASC to perform automatic and fast processing of above-water surface image data to determine iceberg shape and size. The resolution of the method is calculated using data from the iceberg database of the Program of Energy Research and Development (PERD). The method was investigated using data from field trials conducted through the summer of 2014, in which 8 icebergs were surveyed during 3 expeditions. The results were analyzed to determine iceberg characteristics. Limitations of the method, including its accuracy, are addressed. A surface imaging system and a LIDAR system were developed in 2015 to profile the above-water portion of icebergs.
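The "volume intersection" step named above is a classic shape-from-silhouette idea; the sketch below is a generic voxel-carving toy (not the thesis implementation, with calibrated 3x4 projection matrices assumed) that keeps only the voxels whose projections fall inside every occluding contour.

```python
import numpy as np

def carve(voxels, cameras, silhouettes):
    """Volume intersection: keep a voxel only if its projection lies
    inside the silhouette (occluding contour) in every camera view.

    voxels:      (N, 3) array of candidate 3-D points
    cameras:     list of 3x4 projection matrices (assumed calibrated)
    silhouettes: list of binary masks (H x W), one per view
    """
    keep = np.ones(len(voxels), dtype=bool)
    homog = np.hstack([voxels, np.ones((len(voxels), 1))])
    for P, mask in zip(cameras, silhouettes):
        proj = homog @ P.T                    # project to image plane
        uv = proj[:, :2] / proj[:, 2:3]       # perspective divide
        u, v = np.round(uv.T).astype(int)
        h, w = mask.shape
        ok = np.zeros(len(voxels), dtype=bool)
        valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        ok[valid] = mask[v[valid], u[valid]]  # inside the contour?
        keep &= ok
    return voxels[keep]
```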
Dynamic method of stiffness identification in impacting systems for percussive drilling applications
Abstract:
Background: No published studies have observed what happens to children after hospital discharge and whether medication discrepancies occur at the hospital-General Practitioner (GP) interface.1 Objectives: To identify the types of discrepancy between the hospital discharge prescription and the patient's medicines after their first GP prescription. Method: Over a 3 month period (March-June 2012) across two London NHS hospital sites, parents of children aged 18 years and under on long term medications were approached and consented prior to discharge from the ward. The patients were followed up 21 days after discharge by telephone call or home visit, depending on their preference. The parent was asked if they had contacted their GP for further medications during the follow up period; if not, the follow up was rescheduled. The parents were interviewed to find out whether any discrepancies had occurred post discharge, by comparing the patient's hospital discharge letter and the medication at follow up. All this information was captured on a data collection form. Results: Eighty-eight patients were consented and 60 patients (68%; 60/88) were followed up by telephone call 21 days post discharge. A total of 317 medications were ordered at discharge among the 60 patients. Of the 60 who were followed up, nine were lost to follow up, one died post discharge, one was excluded from the study, and 11 had not contacted the GP and were to be followed up at a later date. For the 38 patients who completed follow up, 254 medications had been ordered. Of these 38 patients, 12 (32%) had discrepancies between the discharge letter and the GP prescription, 19 (50%) had no issues, and seven (18%) mentioned post discharge issues that were not discrepancies. Of the 12 patients who had at least one medication discrepancy (total 34 medications, range 1-7 discrepancies per patient), six patients had GP discrepancies, four had discrepancies resulting from a hospital outpatient appointment, one related to the discharge letter order, and one was a complex discrepancy. An example: a patient was discharged on amiodarone liquid 16.5 mg daily as opposed to 65 mg daily of amiodarone from the GP. Upon interview, the parent used volume units to communicate the dose, as opposed to the actual dose itself, and the strength of the liquid had changed. Conclusions: The preliminary results from this study show that discrepancies with several causes occur when paediatric patients leave hospital.
Abstract:
This paper presents an image processing based method for detecting pitting corrosion in steel structures. High Dynamic Range (HDR) imaging has been carried out to demonstrate the effectiveness of such relatively inexpensive techniques, which are of immense benefit to the Non-Destructive Testing (NDT) community. The pitting corrosion of a steel sample in a marine environment is successfully detected using the proposed methodology. It is observed that the proposed method has definite potential for application to a wider range of problems.
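The paper's pipeline details are not given in this abstract; as a hypothetical minimal sketch of the HDR-based idea, the snippet below fuses bracketed exposures with OpenCV's Mertens exposure fusion and then thresholds dark regions as candidate pits (file names and the thresholding step are assumptions).

```python
import cv2
import numpy as np

# Bracketed exposures of the same steel surface (file names assumed).
exposures = [cv2.imread(f) for f in ("under.jpg", "mid.jpg", "over.jpg")]

# Mertens exposure fusion needs no exposure times and yields a
# well-exposed composite with values roughly in [0, 1].
fused = cv2.createMergeMertens().process(exposures)
gray = cv2.cvtColor((fused * 255).astype(np.uint8), cv2.COLOR_BGR2GRAY)

# Candidate pits appear as small dark blobs: inverted Otsu threshold,
# then count connected regions.
_, pits = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
n_labels, _ = cv2.connectedComponents(pits)
print(f"candidate pit regions: {n_labels - 1}")
```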
Abstract:
Current state-of-the-art techniques for landmine detection in ground penetrating radar (GPR) utilize statistical methods to identify characteristics of a landmine response. This research makes use of 2-D slices of data in which subsurface landmine responses have hyperbolic shapes. Various methods from the field of visual image processing are adapted to the 2-D GPR data, producing superior landmine detection results. The research goes on to develop a physics-based GPR augmentation method motivated by current advances in visual object detection; this GPR-specific augmentation is used to mitigate issues caused by insufficient training sets, and the work shows that it improves detection performance under training conditions that are normally very difficult. Finally, this work introduces the use of convolutional neural networks as a method to learn feature extraction parameters; these learned convolutional features outperform hand-designed features in GPR detection tasks. In summary, this work presents a number of methods, both borrowed from and motivated by the substantial literature on visual image processing, that improve overall detection performance and increase the robustness of statistical classification.
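The dissertation's network architecture is not described in the abstract; as a generic sketch of learning convolutional features from 2-D GPR patches (patch size and layer dimensions are assumptions), a minimal PyTorch binary classifier:

```python
import torch
import torch.nn as nn

class GPRPatchNet(nn.Module):
    """Toy CNN over 2-D GPR slices (1 x 64 x 64 patches assumed):
    two conv blocks learn the feature extraction, a linear head
    scores landmine vs. background."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),                       # 64 -> 32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 32 -> 16
        )
        self.head = nn.Linear(32 * 16 * 16, 2)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# One training step on synthetic data, just to show the plumbing.
model = GPRPatchNet()
x = torch.randn(8, 1, 64, 64)          # batch of GPR patches
y = torch.randint(0, 2, (8,))          # 0 = background, 1 = landmine
loss = nn.CrossEntropyLoss()(model(x), y)
loss.backward()
```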