Abstract:
NeEstimator v2 is a completely revised and updated implementation of software that produces estimates of contemporary effective population size, using several different methods and a single input file. NeEstimator v2 includes three single-sample estimators (updated versions of the linkage disequilibrium and heterozygote-excess methods, and a new method based on molecular coancestry), as well as the two-sample (moment-based temporal) method. New features include the following: (i) an improved method for accounting for missing data; (ii) options for screening out rare alleles; (iii) confidence intervals for all methods; (iv) the ability to analyse data sets with large numbers of genetic markers (10000 or more); (v) options for batch processing large numbers of different data sets, which will facilitate cross-method comparisons using simulated data; and (vi) correction for temporal estimates when individuals sampled are not removed from the population (Plan I sampling). The user is given considerable control over input data and over the composition and format of output files. The freely available software has a new Java interface and runs under MacOS, Linux and Windows.
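For readers unfamiliar with the linkage disequilibrium method, the following Python sketch illustrates only its core idea, the textbook relationship E[r²] ≈ 1/S + 1/(3Ne) (Hill 1981). The genotype matrix, the function name `ld_ne_estimate`, and the omission of NeEstimator's missing-data and rare-allele corrections are all simplifications for illustration, not the software's actual implementation.

```python
import numpy as np

def ld_ne_estimate(genotypes):
    """Crude LD-based Ne estimate from an (individuals x loci) matrix of
    allele counts (0/1/2). Illustrative only: real implementations such as
    NeEstimator v2 apply sample-size and rare-allele bias corrections."""
    S, L = genotypes.shape
    # Mean squared correlation of allele counts over all distinct locus pairs.
    r = np.corrcoef(genotypes, rowvar=False)   # L x L correlation matrix
    iu = np.triu_indices(L, k=1)               # upper triangle = distinct pairs
    r2_mean = np.mean(r[iu] ** 2)
    # Hill (1981)-style expectation E[r^2] ~ 1/S + 1/(3*Ne): subtract the
    # sampling component, then invert the drift component.
    r2_drift = r2_mean - 1.0 / S
    return np.inf if r2_drift <= 0 else 1.0 / (3.0 * r2_drift)

# Toy data: 50 individuals, 200 unlinked biallelic loci.
rng = np.random.default_rng(0)
freqs = rng.uniform(0.1, 0.9, size=200)
toy = rng.binomial(2, freqs, size=(50, 200))
print(f"Ne estimate: {ld_ne_estimate(toy):.1f}")
```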
Abstract:
This chapter provides updated information on avocado fruit quality parameters, sensory perception and maturity, production and postharvest factors affecting quality defects, disinfestation and storage (including pre-conditioning), predicting outturn quality and processing.
Abstract:
The Northern Demersal Scalefish Fishery has historically comprised a small fleet (≤10 vessels year⁻¹) operating over a relatively large area off the northwest coast of Australia. This multispecies fishery primarily harvests two species of snapper: goldband snapper, Pristipomoides multidens, and red emperor, Lutjanus sebae. A key input to age-structured assessments of these stocks has been the annual time-series of the catch rate. We used an approach that combined Generalized Linear Models, spatio-temporal imputation, and computer-intensive methods to standardize the fishery catch rates and report uncertainty in the indices. These analyses, which represent one of the first attempts to standardize fish trap catch rates, were also augmented to gain additional insights into the effects of targeting, historical effort creep, and the spatio-temporal resolution of catch and effort data on trap fishery dynamics. Results from monthly reported catches (i.e. from 1993 on) were compared with those from the daily reporting of the enhanced catch and effort logbooks introduced more recently (i.e. from 2008 on). Modelled effects of the catch of one species on the catch rates of another became more conspicuous when the daily data were analysed, and these analyses produced estimates with greater precision. The rate of putative effort creep estimated for standardized catch rates was much lower than that estimated for nominal catch rates. These results therefore demonstrate how important additional insights into fishery and fish population dynamics can be gained from such “pre-assessment” analyses.
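To make the standardization approach concrete, here is a minimal sketch of a GLM-based catch-rate standardization in Python. The Gamma/log model, the column names, and the toy data are assumptions for illustration, not the models fitted in the study (which also involved spatio-temporal imputation and computer-intensive uncertainty estimation).

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Toy catch-and-effort records; real logbook data carry many more
# covariates (vessel, target species, spatial cell, ...).
rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "year":  rng.integers(1993, 2018, n),
    "month": rng.integers(1, 13, n),
    "area":  rng.integers(1, 6, n),
})
df["cpue"] = rng.gamma(shape=2.0, scale=1.0 + 0.02 * (df["year"] - 1993), size=n)

# Gamma GLM with log link: year effects, adjusted for month and area,
# yield the standardized index of abundance.
model = smf.glm("cpue ~ C(year) + C(month) + C(area)",
                data=df,
                family=sm.families.Gamma(link=sm.families.links.Log())).fit()

# Standardized index: predicted CPUE per year at fixed reference levels.
years = sorted(df["year"].unique())
ref = pd.DataFrame({"year": years,
                    "month": df["month"].mode()[0],
                    "area": df["area"].mode()[0]})
index = model.predict(ref)
print(pd.Series(index.values, index=years).round(2))
```

A bootstrap over records (refitting the model on resamples) is one of the computer-intensive routes to the uncertainty intervals the abstract refers to.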
Abstract:
Campylobacter is an important foodborne pathogen, mainly associated with poultry. A lack of through-chain quantitative Campylobacter data has been highlighted within quantitative risk assessments. The aim of this study was to quantitatively and qualitatively measure Campylobacter and Escherichia coli concentrations on chicken carcasses throughout poultry slaughter. Chickens (n = 240) were sampled from each of four flocks along the processing chain: before scald, after scald, before chill, after chill, after packaging, and from individual caeca. The overall prevalence of Campylobacter after packaging was 83%, with a median concentration of 0.8 log10 CFU/mL. The processing points of scalding and chilling produced significant mean reductions in both Campylobacter (1.8 and 2.9 log10 CFU/carcase, respectively) and E. coli (1.3 and 2.5 log10 CFU/carcase). The concentrations of E. coli and Campylobacter were significantly correlated throughout processing, indicating that E. coli may be a useful indicator organism for reductions in Campylobacter concentration. The carriage of species varied between flocks, with two flocks dominated by Campylobacter coli and two flocks dominated by Campylobacter jejuni. Current processing practices can lead to significant reductions in the concentration of Campylobacter on carcasses. Further understanding of the variable effect of processing on Campylobacter and the survival of specific genotypes may enable more targeted interventions to reduce the concentration of this poultry-associated pathogen.
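The reported reductions are simply differences of log10 counts. As a hedged illustration (with simulated values, not the study's data), the sketch below shows this arithmetic and the kind of correlation check that underlies the indicator-organism conclusion.

```python
import numpy as np
from scipy import stats

# Hypothetical per-carcase counts (CFU) before and after chilling.
rng = np.random.default_rng(2)
campy_before = rng.lognormal(mean=9.0, sigma=1.0, size=60)
campy_after = campy_before / rng.lognormal(mean=6.7, sigma=0.5, size=60)
ecoli_before = campy_before * rng.lognormal(mean=0.5, sigma=0.6, size=60)
ecoli_after = ecoli_before / rng.lognormal(mean=5.8, sigma=0.5, size=60)

# A "2.9 log10 CFU/carcase reduction" is the mean difference of log10 counts.
campy_reduction = np.mean(np.log10(campy_before) - np.log10(campy_after))
print(f"mean Campylobacter reduction: {campy_reduction:.1f} log10 CFU/carcase")

# Correlation of the two organisms after processing, on the log scale.
r, p = stats.pearsonr(np.log10(campy_after), np.log10(ecoli_after))
print(f"Campylobacter vs E. coli: r = {r:.2f}, p = {p:.3g}")
```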
Abstract:
The Taita Hills in southeastern Kenya form the northernmost part of Africa’s Eastern Arc Mountains, which have been identified by Conservation International as one of the top ten biodiversity hotspots on Earth. As with many areas of the developing world, over recent decades the Taita Hills have experienced significant population growth leading to associated major changes in land use and land cover (LULC), as well as escalating land degradation, particularly soil erosion. Multi-temporal medium resolution multispectral optical satellite data, such as imagery from the SPOT HRV, HRVIR, and HRG sensors, provides a valuable source of information for environmental monitoring and modelling at a landscape level at local and regional scales. However, utilization of multi-temporal SPOT data in quantitative remote sensing studies requires the removal of atmospheric effects and the derivation of surface reflectance factor. Furthermore, for areas of rugged terrain, such as the Taita Hills, topographic correction is necessary to derive comparable reflectance throughout a SPOT scene. Reliable monitoring of LULC change over time and modelling of land degradation and human population distribution and abundance are of crucial importance to sustainable development, natural resource management, biodiversity conservation, and understanding and mitigating climate change and its impacts. The main purpose of this thesis was to develop and validate enhanced processing of SPOT satellite imagery for use in environmental monitoring and modelling at a landscape level, in regions of the developing world with limited ancillary data availability. The Taita Hills formed the application study site, whilst the Helsinki metropolitan region was used as a control site for validation and assessment of the applied atmospheric correction techniques, where multiangular reflectance field measurements were taken and where horizontal visibility meteorological data concurrent with image acquisition were available. The proposed historical empirical line method (HELM) for absolute atmospheric correction was found to be the only applied technique that could derive surface reflectance factor within an RMSE of < 0.02 ρs in the SPOT visible and near-infrared bands; an accuracy level identified as a benchmark for successful atmospheric correction. A multi-scale segmentation/object relationship modelling (MSS/ORM) approach was applied to map LULC in the Taita Hills from the multi-temporal SPOT imagery. This object-based procedure was shown to derive significant improvements over a uni-scale maximum-likelihood technique. The derived LULC data was used in combination with low cost GIS geospatial layers describing elevation, rainfall and soil type, to model degradation in the Taita Hills in the form of potential soil loss, utilizing the simple universal soil loss equation (USLE). Furthermore, human population distribution and abundance were modelled with satisfactory results using only SPOT and GIS derived data and non-Gaussian predictive modelling techniques. The SPOT derived LULC data was found to be unnecessary as a predictor because the first and second order image texture measurements had greater power to explain variation in dwelling unit occurrence and abundance. The ability of the procedures to be implemented locally in the developing world using low-cost or freely available data and software was considered.
The techniques discussed in this thesis are considered equally applicable to other medium- and high-resolution optical satellite imagery, as well as to the SPOT data utilized here.
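Although HELM's exact formulation is developed in the thesis itself, the empirical line family of corrections shares a simple core: a per-band linear fit from image signal to known target reflectance. Below is a schematic sketch of that core, with hypothetical targets and values; it is not HELM's actual target selection or multi-date handling.

```python
import numpy as np

def empirical_line(dn_targets, refl_targets, dn_band):
    """Fit reflectance = gain * DN + offset from pseudo-invariant ground
    targets, then apply the line to a whole band. Schematic only."""
    gain, offset = np.polyfit(dn_targets, refl_targets, deg=1)
    return gain * dn_band + offset

# Hypothetical calibration targets: dark water, asphalt, bright sand.
dn_targets = np.array([18.0, 55.0, 170.0])    # at-sensor digital numbers
refl_targets = np.array([0.02, 0.12, 0.45])   # known surface reflectance

band = np.random.default_rng(3).uniform(10, 200, size=(4, 4))  # toy SPOT band
surface_reflectance = empirical_line(dn_targets, refl_targets, band)

# Accuracy is judged against reference reflectance, e.g. the RMSE < 0.02
# benchmark mentioned above (here computed only on the fit itself).
fit = empirical_line(dn_targets, refl_targets, dn_targets)
rmse = np.sqrt(np.mean((fit - refl_targets) ** 2))
print(f"fit RMSE on calibration targets: {rmse:.4f}")
```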
Abstract:
What can the statistical structure of natural images teach us about the human brain? Even though the visual cortex is one of the most studied parts of the brain, surprisingly little is known about how exactly images are processed to leave us with a coherent percept of the world around us, so we can recognize a friend or drive on a crowded street without any effort. By constructing probabilistic models of natural images, the goal of this thesis is to understand the structure of the stimulus that is the raison d'être for the visual system. Following the hypothesis that the optimal processing has to be matched to the structure of that stimulus, we attempt to derive computational principles, features that the visual system should compute, and properties that cells in the visual system should have. Starting from machine learning techniques such as principal component analysis and independent component analysis we construct a variety of statistical models to discover structure in natural images that can be linked to receptive field properties of neurons in primary visual cortex such as simple and complex cells. We show that by representing images with phase invariant, complex cell-like units, a better statistical description of the visual environment is obtained than with linear simple cell units, and that complex cell pooling can be learned by estimating both layers of a two-layer model of natural images. We investigate how a simplified model of the processing in the retina, where adaptation and contrast normalization take place, is connected to the natural stimulus statistics. Analyzing the effect that retinal gain control has on later cortical processing, we propose a novel method to perform gain control in a data-driven way. Finally we show how models like those presented here can be extended to capture whole visual scenes rather than just small image patches. By using a Markov random field approach we can model images of arbitrary size, while still being able to estimate the model parameters from the data.
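As a self-contained illustration of the first step of this modelling chain (learning simple-cell-like filters from image patches by ICA), the sketch below substitutes synthetic 1/f-noise images for real natural images so it runs without data files; the patch size and component count are arbitrary choices, not those of the thesis.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(4)

def synthetic_natural_image(size=256):
    """1/f-spectrum noise as a stand-in for a natural image."""
    fx = np.fft.fftfreq(size)[:, None]
    fy = np.fft.fftfreq(size)[None, :]
    amplitude = 1.0 / np.maximum(np.hypot(fx, fy), 1.0 / size)
    phase = np.exp(2j * np.pi * rng.random((size, size)))
    return np.real(np.fft.ifft2(amplitude * phase))

# Sample small patches and remove the mean (DC component) of each.
images = [synthetic_natural_image() for _ in range(8)]
patches = []
for img in images:
    for _ in range(2000):
        y, x = rng.integers(0, img.shape[0] - 12, size=2)
        p = img[y:y + 12, x:x + 12].ravel()
        patches.append(p - p.mean())
X = np.array(patches)

# FastICA whitens internally; the learned unmixing filters play the role of
# simple-cell-like receptive fields (Gabor-like on real natural images,
# arbitrary on this Gaussian stand-in).
ica = FastICA(n_components=64, whiten="unit-variance", random_state=0,
              max_iter=500)
ica.fit(X)
filters = ica.components_.reshape(64, 12, 12)
print(filters.shape)  # (64, 12, 12): one linear filter per component
```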
Abstract:
The paradigm of computational vision hypothesizes that any visual function, such as the recognition of your grandparent, can be replicated by computational processing of the visual input. What are these computations that the brain performs? What should or could they be? Working on the latter question, this dissertation takes the statistical approach, where one attempts to learn the suitable computations from the natural visual data itself. In particular, we empirically study the computational processing that emerges from the statistical properties of the visual world and the constraints and objectives specified for the learning process. This thesis consists of an introduction and 7 peer-reviewed publications, where the purpose of the introduction is to illustrate the area of study to a reader who is not familiar with computational vision research. In the scope of the introduction, we briefly overview the primary challenges to visual processing, as well as recall some of the current opinions on visual processing in the early visual systems of animals. Next, we describe the methodology we have used in our research, and discuss the presented results. We have included in this discussion some additional remarks, speculations and conclusions that were not featured in the original publications. We present the following results in the publications of this thesis. First, we empirically demonstrate that luminance and contrast are strongly dependent in natural images, contradicting previous theories that suggested luminance and contrast were processed separately in natural systems due to their independence in the visual data. Second, we show that simple-cell-like receptive fields of the primary visual cortex can be learned in the nonlinear contrast domain by maximization of independence. Further, we provide first-time reports of the emergence of conjunctive (corner-detecting) and subtractive (opponent orientation) processing due to nonlinear projection pursuit with simple objective functions related to sparseness and response energy optimization. Then, we show that attempting to extract independent components of nonlinear histogram statistics of a biologically plausible representation leads to projection directions that appear to differentiate between visual contexts. Such processing might be applicable for priming, i.e., the selection and tuning of later visual processing. We continue by showing that a different kind of thresholded low-frequency priming can be learned and used to make object detection faster with little loss in accuracy. Finally, we show that in a computational object detection setting, nonlinearly gain-controlled visual features of medium complexity can be acquired sequentially as images are encountered and discarded. We present two online algorithms to perform this feature selection, and propose the idea that for artificial systems, some processing mechanisms could be selectable from the environment without optimizing the mechanisms themselves. In summary, this thesis explores learning visual processing on several levels. The learning can be understood as an interplay of input data, model structures, learning objectives, and estimation algorithms. The presented work adds to the growing body of evidence showing that statistical methods can be used to acquire intuitively meaningful visual processing mechanisms. The work also presents some predictions and ideas regarding biological visual processing.
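A minimal sketch of one-unit projection pursuit with a sparseness objective follows; the kurtosis objective and plain gradient ascent are generic textbook choices, not the exact objective functions of the publications.

```python
import numpy as np

rng = np.random.default_rng(5)

def whiten(X):
    """Zero-mean, identity-covariance (ZCA) projection of the data."""
    X = X - X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    return X @ vecs @ np.diag(vals ** -0.5) @ vecs.T

def sparse_direction(X, iters=200, lr=0.1):
    """Gradient ascent on the kurtosis of the projection s = X w
    (one-unit projection pursuit); w kept unit-norm on whitened data."""
    w = rng.standard_normal(X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(iters):
        s = X @ w
        # Gradient of E[s^4] - 3 E[s^2]^2, using E[x x^T] = I after whitening.
        grad = 4 * (X * (s ** 3)[:, None]).mean(axis=0) - 12 * (s ** 2).mean() * w
        w += lr * grad
        w /= np.linalg.norm(w)
    return w

# Toy data: one sparse (heavy-tailed) source mixed with Gaussian noise.
n = 5000
sources = np.column_stack([rng.laplace(size=n), rng.standard_normal((n, 4))])
X = whiten(sources @ rng.standard_normal((5, 5)))
w = sparse_direction(X)
print("excess kurtosis along learned direction:",
      np.mean((X @ w) ** 4) - 3.0)  # > 0 indicates a sparse direction found
```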
Abstract:
The usual task in music information retrieval (MIR) is to find occurrences of a monophonic query pattern within a music database, which can contain both monophonic and polyphonic content. The so-called query-by-humming systems are a famous instance of content-based MIR. In such a system, the user's hummed query is converted into symbolic form to perform search operations in a similarly encoded database. The symbolic representation (e.g., textual, MIDI or vector data) is typically a quantized and simplified version of the sampled audio data, yielding faster search algorithms and space requirements that can be met in real-life situations. In this thesis, we investigate geometric approaches to MIR. We first study some musicological properties often needed in MIR algorithms, and then give a literature review on traditional (e.g., string-matching-based) MIR algorithms and novel techniques based on geometry. We also introduce some concepts from digital image processing, namely mathematical morphology, which we use to develop and implement four algorithms for geometric music retrieval. The symbolic representation in the case of our algorithms is a binary 2-D image. We use various morphological pre- and post-processing operations on the query and the database images to perform template matching / pattern recognition for the images. The algorithms are basically extensions to the classic image correlation and hit-or-miss transformation techniques used widely in template matching applications. They aim to be a future extension to the retrieval engine of C-BRAHMS, which is a research project of the Department of Computer Science at the University of Helsinki.
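A small sketch of the underlying primitives, using binary piano-roll images and SciPy's morphological operators, may help; the four algorithms of the thesis extend these ideas and are not reproduced here, and the note patterns below are invented.

```python
import numpy as np
from scipy import ndimage

# Binary "piano roll": rows = pitches, columns = time steps.
database = np.zeros((12, 16), dtype=bool)
database[[2, 4, 5, 4], [3, 4, 5, 6]] = True   # a short melodic fragment
database[[7, 9], [10, 11]] = True             # unrelated notes in the database

query = np.zeros((4, 4), dtype=bool)
query[[0, 2, 3, 2], [0, 1, 2, 3]] = True      # the quantized hummed pattern

# Erosion marks every position where ALL query notes are present
# (partial matching: extra notes in the database are allowed).
contained = ndimage.binary_erosion(database, structure=query)

# Hit-or-miss is exact matching: query zeros must also be empty.
exact = ndimage.binary_hit_or_miss(database,
                                   structure1=query,
                                   structure2=~query)
print("containment matches at:", np.argwhere(contained))
print("exact matches at:", np.argwhere(exact))
```

Transposition invariance (matching the melody at any pitch level) falls out of the 2-D formulation, since shifting the query along the pitch axis is just another template position.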
Abstract:
This paper reports on the outcomes of a two-year ALTC Competitive Research and Development Project that aimed to "Develop Strategies at the Pre-Service Level to Address Critical Teacher Attraction and Retention Issues in Australian Rural, Regional and Remote Schools". As well as developing a ‘training framework’ and teaching guides to increase the capacity and credibility of four universities to prepare educators who might venture out of the metropolitan area to teach, the project gathered data from pre-service and graduate teachers to analyse regional resilience. It was found that pre-service teachers were strongly likely to participate in a regional practicum and to stay in a non-metropolitan community after graduating from university if they had a positive attitude to regional Western Australia, whether through a family connection or previous experience. Recommendations from this study emphasise the importance of having pre-service students participate in positive regional experiences early in their university study.
Abstract:
The quality of an online university degree is paramount to the student, the reputation of the university and, most importantly, the profession that will be entered. At the School of Education within Curtin University, we aim to ensure that students within rural and remote areas are provided with high quality degrees equal to their city counterparts who access face-to-face classes on campus. In 2010, the School of Education moved to flexible delivery of a fully online Bachelor of Education degree for their rural students. In previous years, the degree had been delivered in physical locations around the state. Although this served the purpose for the time, it restricted the degree to only those rural students who were able to access the physical campus. The new model in 2010 allows access for students in any rural area who have a computer and an internet connection, regardless of their geographical location. As a result, enrolments of new students have increased. Academic staff had previously used an asynchronous environment to deliver learning modules housed within a learning management system (LMS). To enhance the learning environment and to provide high quality learning experiences to students learning at a distance, synchronous software was introduced. This software is a real-time virtual classroom environment that allows for communication through Voice over Internet Protocol (VoIP) and videoconferencing, along with a large number of collaboration tools to engage learners. This research paper reports on the professional development of academic staff to integrate a live e-learning solution into their current LMS environment. It involved professional development, including technical orientation, for teaching staff and course participants simultaneously. Further, pedagogical innovations were offered to engage the students in a collaborative learning environment. Data were collected from academic staff through semi-structured interviews and participant observation. The findings discuss the perceived value of the technology, problems encountered and solutions sought.
Abstract:
Replication and transcription of the RNA genome of alphaviruses rely on a set of virus-encoded nonstructural proteins. They are synthesized as a long polyprotein precursor, P1234, which is cleaved at three processing sites to yield the nonstructural proteins nsP1, nsP2, nsP3 and nsP4. All four proteins function as constitutive components of the membrane-associated viral replicase. Proteolytic processing of the P1234 polyprotein is precisely orchestrated and coordinates the replicase assembly and maturation. The specificity of the replicase is also controlled by proteolytic cleavages. The early replicase is composed of the P123 polyprotein intermediate and nsP4. It copies the positive-sense RNA genome to a complementary minus-strand. Production of new plus-strands requires complete processing of the replicase. The papain-like protease residing in nsP2 is responsible for all three cleavages in P1234. This study addressed the mechanisms of proteolytic processing of the replicase polyprotein in two alphaviruses, Semliki Forest virus (SFV) and Sindbis virus (SIN), representing different branches of the genus. The survey highlighted the functional relation of the alphavirus nsP2 protease to the papain-like enzymes. A new structural motif, the Cys-His catalytic dyad accompanied by an aromatic residue following the catalytic His, was described for nsP2 and a subset of other thiol proteases. Such an architecture of the catalytic center was named the glycine specificity motif, since it was implicated in recognition of a specific Gly residue in the substrate. In particular, the presence of the motif in nsP2 makes the appearance of this amino acid at the second position upstream of the scissile bond a necessary condition for the cleavage. In addition, four distinct mechanisms were identified that provide affinity for the protease and specifically direct the enzyme to different sites in the P1234 polyprotein. Three factors, namely RNA, the central domain of nsP3 and the N-terminus of nsP2, were demonstrated to be external modulators of the nsP2 protease. Here I suggest that the basal nsP2 protease specificity is inherited from the ancestral papain-like enzyme and employs the recognition of the upstream amino acid signature in the immediate vicinity of the scissile bond. This mechanism is responsible for the efficient processing of the SFV nsP3/nsP4 junction. I propose that the same mechanism is involved in the cleavage of the nsP1/nsP2 junction of both viruses as well. However, in this case it rather serves to position the substrate, whereas the efficiency of the processing is ensured by the capability of nsP2 to cut its own N-terminus in cis. Both types of cleavages are demonstrated here to be inhibited by RNA, which is interpreted as impairing the basal papain-like recognition of the substrate. In contrast, processing of the SIN nsP3/nsP4 junction was found to be activated by RNA and additionally potentiated by the presence of the central region of nsP3 in the protease. The processing of the nsP2/nsP3 junction in both viruses occurred via another mechanism, requiring the exactly processed N-terminus of nsP2 in the protease and insensitive to RNA addition. Therefore, the three processing events in the replicase polyprotein maturation are performed via three distinct mechanisms in each of the two studied alphaviruses.
Distinct sets of conditions required for each cleavage ensure the sequential maturation of the P1234 polyprotein: nsP4 is released first, then the nsP1/nsP2 site is cut in cis, and finally the liberation of the nsP2 N-terminus activates the cleavage of the nsP2/nsP3 junction. The first processing event occurs differently in SFV and SIN, whereas the subsequent cleavages are found to be similar in the two viruses and therefore their mechanisms are suggested to be conserved in the genus. The RNA modulation of the alphavirus nonstructural protease activity, discovered here, implies a bidirectional functional interplay between alphavirus RNA metabolism and protease regulation. The nsP2 protease emerges as a signal-transmitting moiety, which senses the replication stage and responds with proteolytic cleavages. A detailed hypothetical model of the alphavirus replicase core was inferred from the data obtained in the study. Similar principles of replicase organization and protease functioning are expected to be employed by other RNA viruses.
Abstract:
Undergraduate Medical Imaging (MI) students at QUT attend their first clinical placement towards the end of semester two. Students undertake two (pre)clinical skills development units – one theory and one practical. Students gain good contextual and theoretical knowledge during these units via a blended learning model with multiple learning methods employed. Students attend theory lectures, practical sessions, tutorial sessions in both a simulated and virtual environment, and pre-clinical scenario-based tutorial sessions. The aim of this project is to evaluate the use of blended learning in the context of first-year Medical Imaging Radiographic Technique and its effectiveness in preparing students for their first clinical experience. It is hoped that the multiple teaching methods employed within the pre-clinical training unit at QUT build students' clinical skills prior to the real situation. A quantitative approach will be taken, evaluating via pre- and post-clinical-placement surveys. These data will be correlated with data gained in the previous year on the effectiveness of this training approach prior to clinical placement. In 2014, the 59 students surveyed prior to their clinical placement demonstrated positive benefits of using a variety of learning tools to enhance their learning. 98.31% (n=58) of students agreed or strongly agreed that the theory lectures were a useful tool to enhance their learning. This was followed closely by 97% (n=57) of the students realising the value of performing role-play simulation prior to clinical placement. Tutorial engagement was considered useful by 93.22% (n=55), whilst 88.14% (n=52) reasoned that the x-raying of phantoms in the simulated radiographic laboratory was beneficial. Self-directed learning was rated useful by 86.44% (n=51). The virtual reality simulation software was valuable for 72.41% (n=42) of the students. Each of the 4 students who disagreed or strongly disagreed with the usefulness of any one tool strongly agreed with the usefulness of at least one other learning tool. The impact of the blended learning model in meeting diverse student needs continues to be positive, with students engaging in most offerings. Students largely prefer pre-clinical scenario-based practical and tutorial sessions where ‘real-world’ situations are discussed.
Abstract:
Tactile sensation plays an important role in everyday life. While the somatosensory system has been studied extensively, the majority of information has come from studies using animal models. Recent development of high-resolution anatomical and functional imaging techniques has enabled the non-invasive study of human somatosensory cortex and thalamus. This thesis provides new insights into the functional organization of the human brain areas involved in tactile processing using magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI). The thesis also demonstrates certain optimizations of MEG and fMRI methods. Tactile digit stimulation elicited stimulus-specific responses in a number of brain areas. Contralateral activation was observed in somatosensory thalamus (Study II), primary somatosensory cortex (SI; I, III, IV), and posterior auditory belt area (III). Bilateral activation was observed in secondary somatosensory cortex (SII; II, III, IV). Ipsilateral activation was found in the post-central gyrus (area 2 of SI cortex; IV). In addition, phasic deactivation was observed within ipsilateral SI cortex and bilateral primary motor cortex (IV). Detailed investigation of the tactile responses demonstrated that the arrangement of distal-proximal finger representations in area 3b of SI in humans is similar to that found in monkeys (I). An optimized MEG approach was sufficient to resolve such fine detail in functional organization. The SII region appeared to contain double representations for fingers and toes (II). The detection of activations in the SII region and thalamus improved at the individual and group levels when cardiac-gated fMRI was used (II). Better detection of body part representations at the individual level is an important improvement, because identification of individual representations is crucial for studying brain plasticity in somatosensory areas. The posterior auditory belt area demonstrated responses to both auditory and tactile stimuli (III), implicating this area as a physiological substrate for the auditory-tactile interaction observed in earlier psychophysical studies. Comparison of different smoothing parameters (III) demonstrated that proper evaluation of co-activation should be based on individual subject analysis with minimal or no smoothing. Tactile input consistently influenced area 3b of the human ipsilateral SI cortex (IV). The observed phasic negative fMRI response is proposed to result from interhemispheric inhibition via trans-callosal connections. This thesis contributes to a growing body of human data suggesting that processing of tactile stimuli involves multiple brain areas, with different spatial patterns of cortical activation for different stimuli.
Abstract:
Autonomous mission control, unlike automatic mission control which is generally pre-programmed to execute an intended mission, is guided by the philosophy of carrying out a complete mission on its own through online sensing, information processing, and control reconfiguration. A crucial cornerstone of this philosophy is the capability of intelligence and of information sharing between unmanned aerial vehicles (UAVs) or with a central controller through secured communication links. Though several mission control algorithms, for single and multiple UAVs, have been discussed in the literature, they lack a clear definition of the various autonomous mission control levels. In the conventional system, the ground pilot issues the flight and mission control commands to a UAV through a command data link and the UAV transmits intelligence information back to the ground pilot through a communication link. Thus, the success of the mission depends entirely on the information flow through a secured communication link between the ground pilot and the UAV. In the past, mission success depended on the continuous interaction of the ground pilot with a single UAV, while present day applications are attempting to define mission success through efficient interaction of the ground pilot with multiple UAVs. However, the current trend in UAV applications is expected to lead to a futuristic scenario where mission success would depend only on interaction among UAV groups, with no interaction with any ground entity. To reach this capability level, however, it is necessary to first understand the various levels of autonomy and the crucial role that information and communication play in making these autonomy levels possible. This article presents a detailed framework of UAV autonomous mission control levels in the context of information flow and communication between UAVs and UAV groups for each level of autonomy.
Abstract:
The hot deformation behaviour of Mg–3Al alloy has been studied using the processing-map technique. Compression tests were conducted in the temperature range 250–550 °C and strain rate range 3 × 10⁻⁴ to 10² s⁻¹ and the flow stress data obtained from the tests were used to develop the processing map. The various domains in the map corresponding to different dissipative characteristics have been identified as follows: (i) grain boundary sliding (GBS) domain accommodated by slip controlled by grain boundary diffusion at slow strain rates (<10⁻³ s⁻¹) in the temperature range from 350 to 450 °C, (ii) two different dynamic recrystallization (DRX) domains with a peak efficiency of 42% at 550 °C/10⁻¹ s⁻¹ and 425 °C/10² s⁻¹ governed by stress-assisted cross-slip and thermally activated climb as the respective rate controlling mechanisms and (iii) dynamic recovery (DRV) domain below 300 °C in the intermediate strain rate range from 3 × 10⁻² to 3 × 10⁻¹ s⁻¹. The regimes of flow instability have also been delineated in the processing map using an instability criterion. Adiabatic shear banding at higher strain rates (>10¹ s⁻¹) and solute drag by substitutional Al atoms at intermediate strain rates (3 × 10⁻² to 3 × 10⁻¹ s⁻¹) in the temperature range (350–450 °C) are responsible for flow instability. The relevance of these mechanisms with reference to hot working practice of the material has been indicated. The processing maps of Mg–3Al alloy and as-cast Mg have been compared qualitatively to elucidate the effect of alloying with aluminum on the deformation behaviour of magnesium.
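The processing-map construction follows the standard dynamic materials model: the strain-rate sensitivity m gives the efficiency of power dissipation η = 2m/(m + 1), and Prasad's instability parameter ξ = ∂ln(m/(m + 1))/∂ln ε̇ + m marks unstable regimes where ξ < 0. A minimal numerical sketch with made-up flow-stress values (not the paper's data) is:

```python
import numpy as np

# Flow stress sigma(T, strain rate) at a fixed strain (e.g. 0.5);
# values are invented, merely softer when hotter and slower.
temps = np.array([250., 300., 350., 400., 450., 500., 550.])   # deg C
log_rates = np.linspace(np.log(3e-4), np.log(1e2), 9)          # ln(s^-1)
sigma = (200.0 - 0.25 * temps)[:, None] * np.exp(0.15 * log_rates)[None, :]

# Strain-rate sensitivity m = d(ln sigma) / d(ln strain rate) at each T.
m = np.gradient(np.log(sigma), log_rates, axis=1)

# Efficiency of power dissipation, contoured over (T, rate) to give the map.
eta = 2.0 * m / (m + 1.0)

# Prasad instability parameter; xi < 0 delineates flow instability regimes.
xi = np.gradient(np.log(m / (m + 1.0)), log_rates, axis=1) + m

print(f"peak efficiency: {100 * eta.max():.0f}%")
print("unstable cells present:", bool((xi < 0).any()))
```

On real flow-stress measurements, iso-efficiency contours of η plotted over temperature and log strain rate reproduce the domains described above, and the ξ < 0 region delineates the instability regimes.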