965 results for Freezing and processing
Summary:
"May 12, 2006."
Summary:
"National Reactor Testing Station"--Cover.
Summary:
Cybercrime and related malicious activity in our increasingly digital world have become more prevalent and sophisticated, evading traditional security mechanisms. Digital forensics has been proposed to help investigate, understand and eventually mitigate such attacks. The practice of digital forensics, however, is still fraught with various challenges, the most prominent of which are the increasing amounts of data and the diversity of digital evidence sources appearing in digital investigations. Mobile devices and cloud infrastructures are an interesting specimen, as they inherently exhibit these challenging circumstances and are becoming more prevalent in digital investigations today. Additionally, they embody further characteristics such as large volumes of data from multiple sources, dynamic sharing of resources, limited individual device capabilities and the presence of sensitive data. This combined set of circumstances makes digital investigations in mobile and cloud environments particularly challenging. This is not helped by the fact that digital forensics today still involves manual, time-consuming tasks in identifying evidence, performing evidence acquisition and correlating multiple diverse sources of evidence in the analysis phase. Furthermore, industry-standard tools are largely evidence-oriented, have limited support for evidence integration and automate only certain precursory tasks, such as indexing and text searching. In this study, efficiency, in the form of reduced time and human labour, is sought in digital investigations in highly networked environments through the automation of certain activities in the digital forensic process. To this end, requirements are outlined and an architecture is designed for an automated system that performs digital forensics in highly networked mobile and cloud environments. Part of the remote evidence acquisition activity of this architecture is built and tested on several mobile devices in terms of speed and reliability. A method for integrating multiple diverse evidence sources in an automated manner, supporting correlation and automated reasoning, is developed and tested. Finally, the proposed architecture is reviewed and enhancements are proposed to further automate it by introducing decentralization, particularly within the storage and processing functionality. This decentralization also improves machine-to-machine communication, supporting several digital investigation processes enabled by the architecture by harnessing the properties of various peer-to-peer overlays. Remote evidence acquisition improves the efficiency (time and effort involved) of digital investigations by removing the need for proximity to the evidence. Experiments show that a single-TCP-connection client-server paradigm does not offer the required scalability and reliability for remote evidence acquisition and that a multi-TCP-connection paradigm is required.
Finally, informed by published scientific literature, the proposed enhancements for further decentralizing the Live Evidence Information Aggregator (LEIA) architecture offer a platform for increased machine-to-machine communication, thereby enabling automation and reducing the need for manual human intervention. The automated integration, correlation and reasoning over multiple diverse evidence sources demonstrated in the experiments improves speed and reduces the human effort needed in the analysis phase by removing the need for time-consuming manual correlation.
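The multi-TCP-connection acquisition pattern the experiments favour can be illustrated with a minimal sketch. This is an illustration only: the byte-range request format, chunk size, host and port are assumptions for the example, not the wire protocol of the LEIA prototype.

```python
import socket
from concurrent.futures import ThreadPoolExecutor

CHUNK = 1 << 20  # 1 MiB per byte-range request (assumed chunk size)

def fetch_range(host, port, offset, length):
    """Fetch one byte range of an evidence image over its own TCP connection."""
    with socket.create_connection((host, port), timeout=30) as s:
        # Hypothetical request format: "GET <offset> <length>\n".
        s.sendall(f"GET {offset} {length}\n".encode())
        buf = bytearray()
        while len(buf) < length:
            data = s.recv(min(65536, length - len(buf)))
            if not data:
                break
            buf.extend(data)
    return offset, bytes(buf)

def acquire(host, port, total_size, workers=8):
    """Acquire an evidence image of total_size bytes over parallel connections."""
    ranges = [(o, min(CHUNK, total_size - o)) for o in range(0, total_size, CHUNK)]
    image = bytearray(total_size)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for offset, data in pool.map(lambda r: fetch_range(host, port, *r), ranges):
            image[offset:offset + len(data)] = data
    return bytes(image)
```

Because each byte range travels over its own connection, a stalled or dropped connection affects only one chunk rather than the whole transfer, which is the scalability and reliability property the experiments attribute to the multi-connection paradigm.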
Summary:
Research on semantic processing has focused mainly on isolated units of language, which does not reflect the complexity of natural language. To understand how semantic information is processed in a wider context, the first goal of this thesis was to determine whether Swedish pre-school children are able to comprehend semantic context and whether that context is semantically built up over time. The second goal was to investigate how the brain distributes attentional resources, in terms of brain activation amplitude and processing type. Swedish pre-school children were tested in a dichotic listening task with longer children's narratives. The development of the event-related potential N400 component and its amplitude were used to investigate both goals. The decrease of the N400 in both the attended and the unattended channel indicated semantic comprehension and showed that semantic context was built up over time. The attended stimulus received more resources, was processed in more of a top-down manner and displayed a prominent N400 amplitude in contrast to the unattended stimulus. The N400 and the late positivity were more complex than expected, since endings of utterances longer than nine words were not accounted for. More research on wider linguistic context is needed to understand how the human brain comprehends natural language.
Summary:
The world's largest fossil oyster reef, formed by the giant oyster Crassostrea gryphoides and located in Stetten (north of Vienna, Austria), has been studied by Harzhauser et al. (2015, 2016) and Djuricic et al. (2016). Digital documentation of this unique geological site is provided by terrestrial laser scanning (TLS) at the millimeter scale. Obtaining meaningful results is not merely a matter of data acquisition with a suitable device; it requires proper planning, data management and postprocessing. Terrestrial laser scanning technology has high potential for providing the precise 3D mapping that serves as the basis for automatic object detection in different scenarios; however, it faces challenges given the large amounts of data involved and the irregular geometry of an oyster reef. We provide a detailed description of the techniques and strategy used for data collection and processing in Djuricic et al. (2016). Laser scanning made it possible to measure the surface points of an estimated 46,840 shells. The oyster specimens are up to 60 cm long, and their surfaces are modeled with a high accuracy of 1 mm. In addition to the laser scanning measurements, more than 300 photographs were captured, and an orthophoto mosaic was generated with a ground sampling distance (GSD) of 0.5 mm. This high-resolution 3D information and the photographic texture serve as the basis for ongoing and future geological and paleontological analyses. Moreover, they provide unprecedented documentation for conservation issues at a unique natural heritage site.
Summary:
Thesis (Ph.D.)--University of Washington, 2016-06
Summary:
Background: Flexible video bronchoscopes, in particular the Olympus BF Type 3C160, are commonly used in pediatric respiratory medicine. There are no data on the magnification and distortion effects of these bronchoscopes, yet important clinical decisions are made from the images. The aim of this study was to systematically describe the magnification and distortion of flexible bronchoscope images taken at various distances from the object. Methods: Using images of known objects, processed by digital video and computer programs, magnification and distortion scales were derived. Results: Magnification changes as a linear function between 100 mm (×1) and 10 mm (×9.55) from the object, and then as an exponential function between 10 mm and 3 mm (×40). Magnification depends on the orientation of the object relative to the optic (geometrical) axis of the bronchoscope. Magnification also varies across the field of view, with the central magnification being 39% greater than at the periphery of the field of view at 15 mm from the object. However, in the pediatric situation the diameter of the orifices is usually less than 10 mm, which limits exposure to this peripheral fall-off in magnification. Intraclass correlations for measurements and repeatability studies between instruments are very high (r = 0.96). Distortion occurs as both barrel and geometric types, but both are heterogeneous across the field of view. Geometric distortion ranges up to 30% at 3 mm from the object but may be as low as 5%, depending on the position of the object relative to the optic axis. Conclusion: We conclude that the optimal working distance range is between 40 and 10 mm from the object. However, the clinician should be cognisant of both variations in magnification and distortion when making clinical judgements.
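As a rough illustration of the reported magnification behaviour, the sketch below interpolates between the anchor points quoted above (×1 at 100 mm, ×9.55 at 10 mm, ×40 at 3 mm), assuming a linear segment and a simple exponential fit. The functional forms and fitted constants are assumptions for illustration, not the study's calibration.

```python
import math

# Anchor points from the abstract: (distance in mm, magnification factor).
LINEAR_FAR, LINEAR_NEAR = (100.0, 1.0), (10.0, 9.55)
EXP_FAR, EXP_NEAR = (10.0, 9.55), (3.0, 40.0)

# Assumed exponential segment M(d) = A * exp(-k * d), fitted to the near anchors.
_k = math.log(EXP_NEAR[1] / EXP_FAR[1]) / (EXP_FAR[0] - EXP_NEAR[0])
_A = EXP_FAR[1] * math.exp(_k * EXP_FAR[0])

def magnification(d_mm: float) -> float:
    """Estimated on-axis magnification at a working distance of 3-100 mm."""
    if not 3.0 <= d_mm <= 100.0:
        raise ValueError("model only covers 3-100 mm")
    if d_mm >= 10.0:
        # Linear segment between (100 mm, x1) and (10 mm, x9.55).
        (d0, m0), (d1, m1) = LINEAR_FAR, LINEAR_NEAR
        return m0 + (m1 - m0) * (d0 - d_mm) / (d0 - d1)
    return _A * math.exp(-_k * d_mm)

if __name__ == "__main__":
    for d in (100, 40, 15, 10, 5, 3):
        print(f"{d:>3} mm -> x{magnification(d):.2f}")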
Summary:
Stickiness is a common problem encountered in food handling and processing, and also during consumption. Stickiness is observed as adhesion of the food to processing equipment surfaces or as cohesion within the food particulate or mass. An important operation in which this undesirable behavior manifests is drying, particularly the drying of high-sugar and high-fat foods. To date, the stickiness of foods during drying, and of dried powders, has been investigated in relation to their viscous and glass-transition properties. The contact surface energy of the equipment has been ignored in many analyses, despite the fact that some drying operations have been reported to use low-energy contact surfaces in drying equipment to avoid the problems caused by stickiness. This review discusses the fundamentals of adhesion and cohesion mechanisms and relates these phenomena to drying and dried products.
Summary:
In Queensland, Australia, there is presently a high level of interest in long-rotation hardwood plantation investments for sawlog production, despite the consensus in the Australian literature that such investments are not financially viable. Continuing genetics, silviculture and processing research, and increasing awareness of the ecosystem services generated by plantations, are anticipated to make future plantings profitable and socio-economically desirable in many parts of Queensland. Financial and economic models of hardwood plantations in Queensland are developed to test this hypothesis. The economic model accounts for carbon sequestration, salinity amelioration and other ecosystem-service values of hardwood plantations. A carbon model estimates the value of carbon sequestered, while salinity and other ecosystem-service values are estimated by the benefit-transfer method. Where high growth rates (20-25 m³ ha⁻¹ year⁻¹) are achievable, long-rotation hardwood plantations are profitable in Queensland Hardwood Regions 1, 3 and 7 when rural land values are less than $2300/ha. Under optimistic assumptions, hardwood plantations growing at a rate of 15 m³ ha⁻¹ year⁻¹ are financially viable in Hardwood Regions 2, 4 and 8, provided land values are less than $1600/ha. The major implication of the economic analysis is that long-rotation hardwood plantation forestry is socio-economically justified in most Hardwood Regions, even though financial returns from timber production may be negative.
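A minimal sketch of the kind of discounted cash-flow comparison such a financial/economic model performs is given below; every parameter value is an illustrative assumption, not a figure from the study.

```python
# Illustrative plantation NPV sketch; all parameter values are assumptions.

def plantation_npv(land_cost, establishment_cost, annual_cost,
                   mai, rotation_years, stumpage_price,
                   annual_ecosystem_value, discount_rate):
    """Net present value per hectare over one rotation.

    mai: mean annual increment (m3/ha/year); harvest volume = mai * rotation.
    annual_ecosystem_value: $/ha/year for carbon, salinity and other
    ecosystem services (zero for the purely financial analysis).
    """
    npv = -land_cost - establishment_cost
    for t in range(1, rotation_years + 1):
        npv += (annual_ecosystem_value - annual_cost) / (1 + discount_rate) ** t
    harvest_revenue = mai * rotation_years * stumpage_price
    npv += harvest_revenue / (1 + discount_rate) ** rotation_years
    return npv

# Illustrative run: 25-year rotation at 20 m3/ha/year on $2300/ha land.
print(round(plantation_npv(land_cost=2300, establishment_cost=1500,
                           annual_cost=60, mai=20, rotation_years=25,
                           stumpage_price=45, annual_ecosystem_value=80,
                           discount_rate=0.07), 2))
```

Setting annual_ecosystem_value to zero yields the purely financial result; a positive ecosystem-service value can turn a financially negative plantation into a socio-economically justified one, which is the analysis's central point.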
Summary:
We review the field of quantum optical information, from elementary considerations to quantum computation schemes. We illustrate the discussion with descriptions of experimental demonstrations of key communication and processing tasks from the last decade, and also look forward to the key results likely in the next decade. We examine both discrete (single-photon) processing and processing that employs continuous-variable manipulations. The mathematical formalism is kept to the minimum needed to understand the key theoretical and experimental results.
Summary:
Recent developments in service-oriented and distributed computing have created exciting opportunities for the integration of models in service chains to create the Model Web. This offers the potential for orchestrating web data and processing services in complex chains: a flexible approach that exploits the increased access to products and tools, and the scalability offered by the Web. However, the uncertainty inherent in data and models must be quantified and communicated in an interoperable way for its effects to be assessed as errors propagate through complex automated model chains. We describe a proposed set of tools for handling, characterizing and communicating uncertainty in this context, and show how they can be used to 'uncertainty-enable' Web Services in a model chain. An example implementation is presented, which combines environmental and publicly contributed data to produce estimates of sea-level air pressure, with estimates of uncertainty that incorporate the effects of model approximation as well as the uncertainty inherent in the observational and derived data.
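One common way to uncertainty-enable such a chain is Monte Carlo propagation: sample each uncertain input and model-approximation error, push the samples through the whole chain, and summarize the output distribution. The sketch below uses hypothetical model functions and error magnitudes chosen for illustration; it shows the idea, not the paper's actual services.

```python
import random

def observed_pressure(_):
    # Station pressure observation with assumed Gaussian noise (hPa).
    return random.gauss(1013.2, 0.8)

def altitude_correction(p, station_height_m=120.0):
    # Approximate reduction of station pressure to sea level; an assumed
    # Gaussian term represents the model-approximation error.
    reduced = p * (1 + station_height_m / 8434.0)
    return reduced + random.gauss(0.0, 0.3)

def propagate(chain, n=10_000):
    """Push n Monte Carlo samples through the chain; return mean and std dev."""
    outputs = []
    for _ in range(n):
        value = None
        for step in chain:
            value = step(value)
        outputs.append(value)
    mean = sum(outputs) / n
    var = sum((x - mean) ** 2 for x in outputs) / (n - 1)
    return mean, var ** 0.5

mean, sd = propagate([observed_pressure, altitude_correction])
print(f"sea-level pressure: {mean:.1f} +/- {sd:.1f} hPa")
```

The output standard deviation combines observational and model-approximation uncertainty, which is exactly what an interoperable uncertainty description must carry from one service to the next.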
Summary:
Over the past two decades, the European Union (EU) has played an increasingly influential role in the construction of a de facto common immigration and asylum policy, providing a forum for policy-formulation beyond the scrutiny of national parliaments. The guiding principles of this policy include linking the immigration portfolio to security rather than justice; reaffirming the importance of political, conceptual and organizational borders; and attempting to transfer policing and processing functions to non-EU countries. The most important element, I argue, is the structural racialization of immigration that occurs across the various processes and which escapes the focus of much academic scrutiny. Exploring this phenomenon through the concept of the “racial state,” I examine ways to understand the operations of immigration policy-making at the inter-governmental level, giving particular attention to the ways in which asylum-seekers emerge as a newly racialized group who are both stripped of their rights in the global context and deployed as Others in the construction of national narratives.
Summary:
The aim of this work was to investigate the principle of combined centrifugal bioreaction-separation. The production of dextran and fructose by the action of the enzyme dextransucrase on sucrose was employed to elucidate some of the principles of this type of process. Dextran is a valuable pharmaceutical product used mainly as a blood volume expander and blood flow improver, whilst fructose is an important dietary product. The development of a single-step process capable of the simultaneous biosynthesis of dextran and separation of the fructose by-product should improve dextran yields whilst reducing capital and processing costs. This thesis shows for the first time that it is possible to conduct successful bioreaction-separations using a rate-zonal centrifugation technique. By layering thin zones of dextransucrase enzyme onto sucrose gradients and centrifuging, very high molecular weight (MW) dextran-enzyme complexes were formed that rapidly sedimented through the sucrose substrate gradients under the influence of the applied centrifugal field. The low-MW fructose by-product sedimented at reduced rates and was thus separated from the enzyme and dextran during the reaction. The MW distribution of dextran recovered from the centrifugal bioreactor was compared with that from a conventional batch bioreactor. The results indicated that, at 20% w/w sucrose concentrations, the centrifugal bioreactor produced up to 100% more clinical dextran with MWs between 12,000 and 98,000 than conventional bioreactors. This was due to the removal of acceptor fructose molecules from the sedimenting reaction zone by the action of the centrifugal field. Higher proportions of unwanted lower-MW dextran were found in the conventional bioreactor than in the centrifugal bioreactor-separator. The process was studied on a number of alternative centrifugal systems. A zonal rotor fitted with a reorienting gradient core proved most successful for the evaluation of bioreactor performance. Results indicated that viscosity build-up in the reactor must be minimised in order to increase the yield of dextran per unit time and improve product separation. A preliminary attempt at modelling the process has also been made.
Summary:
This thesis investigates the role of cohesion in the organisation and processing of three text types in English and Arabic. In other words, it attempts to shed some light on the descriptive and explanatory power of cohesion across different text typologies. To this end, three text types, namely literary fictional narrative, newspaper editorial and science, were analysed to ascertain the intra- and inter-sentential trends in textual cohesion characteristic of each text type in each language. In addition, two small-scale experiments were carried out to explore the facilitatory effect of one cohesive device (lexical repetition) on the comprehension of three English text types by Arab learners. The first experiment examined this effect in an English science text; the second covered three English text types: fictional narrative, culturally-oriented and science. Some interesting and significant results emerged from the textual analysis and the pilot studies. Most importantly, each text type tends to utilise the cohesive trends that are compatible with its readership, reader knowledge, reading style and pedagogical purpose. Whereas fictional narratives largely cohere through pronominal co-reference, editorials and science texts derive much of their cohesion from lexical repetition. As for cross-language differences, English opts for economy in the use of cohesive devices, while Arabic largely coheres through the redundant effect created by the high frequency of most of those devices. Cohesion thus proves to be a variable rather than a homogeneous phenomenon, dictated by text type among other factors. The results of the experiments suggest that lexical repetition does facilitate the comprehension of English texts by Arab learners. Fictional narratives were found to be easier to process and understand than expository texts. Consequently, cohesion can assist in the processing of a text just as it can in its creation.
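By way of illustration, a crude corpus measure of lexical repetition, the cohesive device tested in the experiments, might look like the following sketch. The tokenization and the density metric are assumptions for illustration, not the thesis's analytical procedure.

```python
import re
from collections import Counter

def lexical_repetition_density(text: str) -> float:
    """Share of content-word tokens that repeat an earlier token (0..1)."""
    # Minimal assumed stopword list; a real analysis would use a fuller one.
    stopwords = {"the", "a", "an", "of", "in", "and", "or", "to", "is", "are"}
    tokens = [t for t in re.findall(r"[a-z]+", text.lower())
              if t not in stopwords]
    if not tokens:
        return 0.0
    counts = Counter(tokens)
    repeated = sum(c for c in counts.values() if c > 1)
    return repeated / len(tokens)

print(lexical_repetition_density(
    "Cohesion creates texture; a text coheres when cohesion ties "
    "sentence to sentence."))
```

Comparing such a density score across fictional narrative, editorial and science samples in each language would give a rough quantitative analogue of the text-type differences the thesis reports.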