926 results for Vision-Based Forced Landing
Abstract:
Chrysophyte cysts are recognized as powerful proxies of cold-season temperatures. In this paper we use the relationship between chrysophyte assemblages and the number of days below 4 °C (DB4°C) in the epilimnion of a lake in northern Poland to develop a transfer function and to reconstruct winter severity in Poland for the last millennium. DB4°C is a climate variable related to the length of the winter. Multivariate ordination techniques were used to study the distribution of chrysophytes from sediment traps of 37 lowland lakes distributed along a variety of environmental and climatic gradients in northern Poland. Of all the environmental variables measured, stepwise variable selection and individual redundancy analyses (RDA) identified DB4°C as the most important variable for chrysophytes, explaining a portion of variance independent of variables related to water chemistry (conductivity, chlorides, K, sulfates), which were also important. A quantitative transfer function was created to estimate DB4°C from sedimentary assemblages using partial least squares regression (PLS). The two-component model (PLS-2) had a cross-validated coefficient of determination of R²cross = 0.58, with a root mean squared error of prediction (RMSEP, based on leave-one-out cross-validation) of 3.41 days. The resulting transfer function was applied to an annually varved sediment core from Lake Żabińskie, providing a new sub-decadal quantitative reconstruction of DB4°C with high chronological accuracy for the period AD 1000–2010. During medieval times (AD 1180–1440) winters were generally shorter (warmer), except for a decade of very long and severe winters around AD 1260–1270 (following the AD 1258 volcanic eruption). The 16th and 17th centuries and the beginning of the 19th century experienced very long, severe winters. Comparison with other European cold-season reconstructions and atmospheric indices for this region indicates that a large part of the winter variability (reconstructed DB4°C) is due to the interplay between the oscillations of the zonal flow controlled by the North Atlantic Oscillation (NAO) and the influence of continental anticyclonic systems (Siberian High, East Atlantic/Western Russia pattern). Differences from other European records are attributed to geographic climatological differences between Poland and Western Europe (Low Countries, Alps). The striking correspondence between the combined volcanic and solar forcing and the DB4°C reconstruction prior to the 20th century suggests that winter climate in Poland responds mostly to natural forced variability (volcanic and solar), while the influence of unforced variability is low.
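As a concrete illustration of the statistical core, the following minimal Python sketch fits a two-component PLS transfer function and estimates RMSEP by leave-one-out cross-validation with scikit-learn. The assemblage matrix and DB4°C values are random placeholders standing in for the 37-lake calibration set, not the paper's data.

    # Sketch of a two-component PLS transfer function with leave-one-out
    # RMSEP. 'assemblages' and 'db4c' are hypothetical stand-ins for the
    # survey data, not the published dataset.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import LeaveOneOut

    rng = np.random.default_rng(0)
    assemblages = rng.random((37, 20))   # 37 lakes x 20 taxa (relative abundances)
    db4c = rng.uniform(20, 60, 37)       # observed days below 4 deg C per lake

    errors = []
    for train, test in LeaveOneOut().split(assemblages):
        model = PLSRegression(n_components=2)          # the PLS-2 model
        model.fit(assemblages[train], db4c[train])
        pred = model.predict(assemblages[test]).ravel()
        errors.append((pred[0] - db4c[test][0]) ** 2)

    rmsep = np.sqrt(np.mean(errors))     # the paper reports RMSEP = 3.41 days
    print(f"leave-one-out RMSEP: {rmsep:.2f} days")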
Abstract:
This chapter proposes a personalized X-ray reconstruction-based planning and post-operative treatment evaluation framework, called iJoint, for advancing modern Total Hip Arthroplasty (THA). Based on a mobile X-ray image calibration phantom and a unique 2D-3D reconstruction technique, iJoint can generate patient-specific models of the hip joint by non-rigidly matching statistical shape models to the X-ray radiographs. Such a reconstruction enables true 3D planning and treatment evaluation of hip arthroplasty from just 2D X-ray radiographs, whose acquisition is part of the standard diagnostic and treatment loop. As part of the system, a 3D model-based planning environment provides surgeons with hip arthroplasty-related parameters such as implant type, size, position, offset, and leg length equalization. With this newly developed system, we are able to provide true 3D solutions for computer-assisted planning of THA using only 2D X-ray radiographs, which is not only innovative but also cost-effective.
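The 2D-3D reconstruction step rests on the statistical-shape-model idea: a new shape is expressed as the mean shape plus a weighted sum of principal variation modes, with the weights optimized so the projected model matches the radiograph. The Python sketch below illustrates this under strongly simplifying assumptions (orthographic projection, synthetic data); every name in it is a hypothetical stand-in, not the iJoint implementation.

    # Minimal sketch of statistical shape model (SSM) fitting:
    # shape = mean + sum_i b_i * mode_i, with mode weights b optimized so
    # the projected silhouette matches the X-ray contour. All data are
    # synthetic placeholders.
    import numpy as np
    from scipy.optimize import minimize

    n_points, n_modes = 500, 5
    rng = np.random.default_rng(1)
    mean_shape = rng.random((n_points, 3))          # mean hip-joint surface points
    modes = rng.random((n_modes, n_points, 3))      # principal variation modes

    def instance(b):
        """Generate a shape instance from mode weights b."""
        return mean_shape + np.tensordot(b, modes, axes=1)

    def silhouette_error(b, target_2d):
        """Project the instance onto the image plane (orthographic sketch:
        drop z) and measure distance to the target contour points."""
        projected = instance(b)[:, :2]
        return np.mean((projected - target_2d) ** 2)

    target_2d = mean_shape[:, :2] + 0.05            # fake radiograph contour
    result = minimize(silhouette_error, x0=np.zeros(n_modes), args=(target_2d,))
    print("fitted mode weights:", np.round(result.x, 3))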
Abstract:
Background: Sensor-based recordings of human movements are becoming increasingly important for the assessment of motor symptoms in neurological disorders beyond rehabilitative purposes. ASSESS MS is a movement recording and analysis system being developed to automate the classification of motor dysfunction in patients with multiple sclerosis (MS) using depth-sensing computer vision. It aims to provide a more consistent and finer-grained measurement of motor dysfunction than currently possible. Objective: To test the usability and acceptability of ASSESS MS with health professionals and patients with MS. Methods: A prospective, mixed-methods study was carried out at 3 centers. After a 1-hour training session, a convenience sample of 12 health professionals (6 neurologists and 6 nurses) used ASSESS MS to capture recordings of standardized movements performed by 51 volunteer patients. Metrics for effectiveness, efficiency, and acceptability were defined and used to analyze data captured by ASSESS MS, video recordings of each examination, feedback questionnaires, and follow-up interviews. Results: All health professionals were able to complete recordings using ASSESS MS, achieving high levels of standardization on 3 of 4 metrics (movement performance, lateral positioning, and clear camera view but not distance positioning). Results were unaffected by patients’ level of physical or cognitive disability. ASSESS MS was perceived as easy to use by both patients and health professionals with high scores on the Likert-scale questions and positive interview commentary. ASSESS MS was highly acceptable to patients on all dimensions considered, including attitudes to future use, interaction (with health professionals), and overall perceptions of ASSESS MS. Health professionals also accepted ASSESS MS, but with greater ambivalence arising from the need to alter patient interaction styles. There was little variation in results across participating centers, and no differences between neurologists and nurses. Conclusions: In typical clinical settings, ASSESS MS is usable and acceptable to both patients and health professionals, generating data of a quality suitable for clinical analysis. An iterative design process appears to have been successful in accounting for factors that permit ASSESS MS to be used by a range of health professionals in new settings with minimal training. The study shows the potential of shifting ubiquitous sensing technologies from research into the clinic through a design approach that gives appropriate attention to the clinic environment.
Abstract:
Context. On 12 November 2014 the European mission Rosetta succeeded in delivering a lander, named Philae, onto the surface of one of the smallest, lowest-gravity, and most primitive bodies of the solar system, the comet 67P/Churyumov-Gerasimenko (67P). Aims. The aim of this paper is to provide a comprehensive geomorphological and spectrophotometric analysis of Philae's landing site (Agilkia) to give an essential framework for the interpretation of its in situ measurements. Methods. OSIRIS images, coupled with gravitational slopes derived from the 3D shape model based on stereo-photogrammetry, were used to interpret the geomorphology of the site. We adopted the Hapke model, using previously derived parameters, to photometrically correct the images in the orange filter (649.2 nm). The best approximation to the Hapke model, given by the parameterless Akimov function, was used to correct the reflectance for the effects of viewing and illumination conditions in the other filters. Spectral analyses on coregistered color cubes were used to retrieve spectrophotometric properties. Results. The landing site shows an average normal albedo of 6.7% in the orange filter, with variations of ~15%, and a globally featureless spectrum with an average red spectral slope of 15.2%/100 nm between 480.7 nm (blue filter) and 882.1 nm (near-IR filter). The spatial analysis shows a well-established correlation between the geomorphological units and the photometric characteristics of the surface. In particular, smooth deposits have the highest reflectance and a bluer spectrum than the outcropping material across the area. Conclusions. The featureless spectrum and the redness of the material are compatible with the results from other instruments that have suggested an organic composition. The observed small spectral variegation could be due to grain size effects. However, the combination of photometric and spectral variegation suggests that a compositional differentiation is more likely. This might be tentatively interpreted as the effect of the efficient dust-transport processes acting on 67P. High-activity regions might be the original sources of the smooth fine-grained materials that then covered Agilkia through airfall of residual material. More observations performed by OSIRIS as the comet approaches the Sun will help interpret the processes at work shaping the landing site and the overall nucleus.
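For reference, the red spectral slope quoted above can be computed from filter reflectances as in the short Python sketch below. The normalization convention (here, reflectance at the blue filter) is an assumption for illustration; published OSIRIS analyses define the slope relative to a chosen reference filter.

    # Hedged sketch: red spectral slope in %/(100 nm) between the blue
    # (480.7 nm) and near-IR (882.1 nm) filters, normalized to the blue
    # reflectance (an assumed convention).
    def spectral_slope(r_blue, r_nir, lam_blue=480.7, lam_nir=882.1):
        """Percent reflectance change per 100 nm, normalized to r_blue."""
        return (r_nir - r_blue) / (r_blue * (lam_nir - lam_blue)) * 100.0 * 100.0

    # Example: a ~15 %/(100 nm) red slope, of the order reported for Agilkia.
    print(f"{spectral_slope(0.067, 0.107):.1f} %/100 nm")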
Abstract:
Background: Diabetes mellitus is spreading throughout the world, and diabetic individuals have been shown to often assess their food intake inaccurately; therefore, it is a matter of urgency to develop automated diet assessment tools. The recent availability of mobile phones with enhanced capabilities, together with advances in computer vision, has permitted the development of image analysis apps for the automated assessment of meals. GoCARB is a mobile phone-based system designed to support individuals with type 1 diabetes during daily carbohydrate estimation. In a typical scenario, the user places a reference card next to the dish and acquires two images using a mobile phone. A series of computer vision modules detect the plate and automatically segment and recognize the different food items, while their 3D shape is reconstructed. Finally, the carbohydrate content is calculated by combining the volume of each food item with the nutritional information provided by the USDA Nutrient Database for Standard Reference. Objective: The main objective of this study is to assess the accuracy of the GoCARB prototype when used by individuals with type 1 diabetes and to compare it with their own performance in carbohydrate counting. In addition, the user experience and usability of the system are evaluated by questionnaires. Methods: The study was conducted at the Bern University Hospital, “Inselspital” (Bern, Switzerland) and involved 19 adult volunteers with type 1 diabetes, each participating once. On each study day, a total of six meals of broad diversity were taken from the hospital’s restaurant and presented to the participants. The food items were weighed on a standard balance and the true amount of carbohydrate was calculated from the USDA nutrient database. Participants were asked to count the carbohydrate content of each meal independently and then by using GoCARB. At the end of each session, a questionnaire was completed to assess the user’s experience with GoCARB. Results: The mean absolute error was 27.89 (SD 38.20) grams of carbohydrate for the participants’ estimates, whereas the corresponding value for the GoCARB system was 12.28 (SD 9.56) grams of carbohydrate, a significantly better performance (P=.001). In 75.4% (86/114) of the meals the GoCARB automatic segmentation was successful, and 85.1% (291/342) of individual food items were successfully recognized. Most participants found GoCARB easy to use. Conclusions: This study indicates that the system is able to estimate, on average, the carbohydrate content of meals with higher accuracy than individuals with type 1 diabetes can. The participants found the app useful and easy to use. GoCARB seems to be a well-accepted supportive mHealth tool for the assessment of meals served on a plate.
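The final computation step described above, combining each recognized item's reconstructed volume with per-100-g carbohydrate values from a nutrient database, can be sketched in a few lines of Python. The densities and carbohydrate values below are illustrative placeholders, not USDA data.

    # Sketch of the carbohydrate calculation: volume -> grams via density,
    # then grams -> carbohydrate via per-100-g content. Values are invented
    # placeholders, not USDA Nutrient Database entries.
    FOOD_DB = {
        # food: (density g/cm^3, carbohydrate g per 100 g) -- hypothetical
        "rice":     (0.85, 28.0),
        "potatoes": (1.05, 17.0),
        "beans":    (0.80,  9.0),
    }

    def meal_carbohydrate(items):
        """items: list of (food_name, volume_cm3) pairs coming from the
        segmentation + 3D reconstruction stages. Returns grams of carbs."""
        total = 0.0
        for food, volume_cm3 in items:
            density, carb_per_100g = FOOD_DB[food]
            grams = volume_cm3 * density
            total += grams * carb_per_100g / 100.0
        return total

    print(f"{meal_carbohydrate([('rice', 180), ('beans', 120)]):.1f} g carbs")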
Abstract:
Repair of retinal detachment is a common ophthalmologic procedure, and its outcome is typically measured by a single factor: improvement in visual acuity. Health-related functional outcome testing, which quantifies the patient's self-reported perception of impairment, can be integrated with objective clinical findings. Based on the patient's self-assessed lifestyle impairment, the physician and patient together can make an informed decision on the treatment that is most likely to benefit the patient. A functional outcome test (the Houston Vision Assessment Test-Retina; HVAT-Retina) was developed and validated in patients with multiple retinal detachments in the same eye. The HVAT-Retina divides an estimated total impairment into subcomponents: the contribution of visual disability (potentially correctable by retinal detachment surgery) and that of nonvisual physical disabilities (co-morbidities not affected by retinal detachment surgery). Seventy-six patients participated in this prospective multicenter study. Seven patients were excluded from the analysis because they were not certain of their answers. Cronbach's alpha coefficient was 0.91 for the pre-surgery HVAT-Retina and 0.94 post-surgery. The item-to-total correlation ranged from 0.50 to 0.88. The visual impairment score improved by 9 points from pre-surgery (p = 0.0003). The physical impairment score also improved from pre-surgery (p = 0.0002). In conclusion, the results of this study demonstrate that the instrument is reliable and valid in patients presenting with recurrent retinal detachments. The HVAT-Retina is a simple instrument and does not burden the patient or the health professional in terms of time or cost. It may be self-administered, not requiring an interviewer. Because the HVAT-Retina was designed to demonstrate outcomes perceivable by the patient, it has the potential to guide the decision-making process between patient and physician.
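Cronbach's alpha, the internal-consistency statistic reported above (0.91 pre-surgery, 0.94 post-surgery), follows a standard formula: alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). Below is a minimal Python sketch using a hypothetical respondents-by-items score matrix, not the study data.

    # Minimal sketch of Cronbach's alpha on a synthetic score matrix.
    import numpy as np

    def cronbach_alpha(scores):
        """scores: (n_respondents, n_items) array of item scores."""
        k = scores.shape[1]
        item_vars = scores.var(axis=0, ddof=1)       # per-item variances
        total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    rng = np.random.default_rng(2)
    base = rng.normal(size=(76, 1))                  # shared trait -> correlated items
    scores = base + 0.5 * rng.normal(size=(76, 10))  # 76 patients x 10 items
    print(f"alpha = {cronbach_alpha(scores):.2f}")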
Abstract:
In a marvelous but somewhat neglected paper, 'The Corporation: Will It Be Managed by Machines?', Herbert Simon articulated from the perspective of 1960 his vision of what we now call the New Economy: the machine-aided system of production and management of the late twentieth century. Simon's analysis sprang from what I term the principle of cognitive comparative advantage: one has to understand the quite different cognitive structures of humans and machines (including computers) in order to explain and predict the tasks to which each will be most suited. Perhaps unlike Simon's better-known predictions about progress in artificial intelligence research, the predictions of this 1960 article hold up remarkably well and continue to offer important insights. In what follows I attempt to tell a coherent story about the evolution of machines and the division of labor between humans and machines. Although inspired by Simon's 1960 paper, I weave many other strands into the tapestry, from classical discussions of the division of labor to present-day evolutionary psychology. The basic conclusion is that, with growth in the extent of the market, we should see humans 'crowded into' tasks that call for the kinds of cognition for which humans have been equipped by biological evolution. These human cognitive abilities range from the exercise of judgment in situations of ambiguity and surprise to more mundane abilities in spatio-temporal perception and locomotion. Conversely, we should see machines 'crowded into' tasks with a well-defined structure. This conclusion is not based (merely) on a claim that machines, including computers, are specialized idiots-savants today because of the limits (whether temporary or permanent) of artificial intelligence; rather, it rests on a claim that, for what are broadly 'economic' reasons, it will continue to make economic sense to create machines that are idiots-savants.
Abstract:
This issue of the Family Preservation Journal combines two emerging interests in the fields of family preservation and family support. First, contemporary forces are making the world a smaller and smaller orb, and we see the plight of families and children around the globe on a daily basis. Our vision of families' needs is broadening, bringing with it questions about how services and systems support families in different cultures and under different governmental structures. Accompanying this global awareness is a greater emphasis on making service delivery and the evaluation of services more transparent to families. True to the original vision of family-based services, more and more agencies are incorporating consumers' perspectives into the design of services and are seeking their views on what works and why.
Abstract:
Bromoform (CHBr3) is an important precursor of atmospheric reactive bromine species that are involved in ozone depletion in the troposphere and stratosphere. In the open ocean, bromoform production is linked to phytoplankton that contain the enzyme bromoperoxidase. Coastal sources of bromoform are higher than open-ocean sources. However, open-ocean emissions are important because the transfer of tracers to higher altitudes in the atmosphere, i.e. into the ozone layer, strongly depends on the location of emissions. For example, emissions in the tropics are more rapidly transported into the upper atmosphere than emissions from higher latitudes. The global spatio-temporal features of bromoform emissions are poorly constrained. Here, a global three-dimensional ocean biogeochemistry model (MPIOM-HAMOCC) is used to simulate bromoform cycling in the ocean and emissions into the atmosphere, using recently published data on global atmospheric concentrations (Ziska et al., 2013) as the upper boundary condition. Our simulated surface concentrations of CHBr3 match the observations well. Simulated global annual emissions based on monthly mean model output are lower than previous estimates, including the estimate by Ziska et al. (2013), because the gas exchange reverses when less bromoform is produced in non-blooming seasons. This is the case at higher latitudes, i.e. the polar regions and the northern North Atlantic. Further model experiments show that future model studies may need to distinguish between different bromoform-producing phytoplankton species, and reveal that the transport of CHBr3 from the coast considerably alters open-ocean bromoform concentrations, in particular in the northern subpolar and polar regions.
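The seasonal sign reversal of the gas exchange follows from the standard bulk flux formula F = k_w * (C_w - C_atm / H): when production is low, the surface-water concentration C_w falls below the atmospheric equilibrium value C_atm / H and the flux turns negative (ocean uptake). A hedged Python sketch with illustrative parameter values, not MPIOM-HAMOCC parameters:

    # Bulk air-sea exchange sketch: F = k_w * (C_w - C_atm / H).
    def bromoform_flux(k_w, c_water, c_atm, henry):
        """k_w: transfer velocity; c_water: sea-surface CHBr3 concentration;
        c_atm / henry: atmospheric equilibrium concentration (henry is the
        dimensionless Henry coefficient)."""
        return k_w * (c_water - c_atm / henry)

    # Bloom season: supersaturated surface water -> outgassing (positive flux)
    print(bromoform_flux(k_w=2.0, c_water=8.0, c_atm=1.5, henry=0.4))
    # Non-bloom season at high latitudes: undersaturated -> uptake (negative)
    print(bromoform_flux(k_w=2.0, c_water=2.0, c_atm=1.5, henry=0.4))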
Abstract:
During the past five million years, benthic δ18O records indicate a large range of climates, from warmer than today during the Pliocene Warm Period to considerably colder during glacials. Antarctic ice cores have revealed Pleistocene glacial-interglacial CO2 variability of 60-100 ppm, while sea level fluctuations of typically 125 m are documented by proxy data. However, in the pre-ice-core period, CO2 and sea level proxy data are scarce, and there is disagreement between different proxies and between different records of the same proxy. This hampers a comprehensive understanding of the long-term relations between CO2, sea level, and climate. Here, we drive a coupled climate-ice sheet model over the past five million years, inversely forced by a stacked benthic δ18O record. We obtain continuous simulations of benthic δ18O, sea level, and CO2 that are mutually consistent. Our model shows CO2 concentrations of 300 to 470 ppm during the Early Pliocene. Furthermore, we simulate strong CO2 variability during the Pliocene and Early Pleistocene. These features are broadly supported by existing and new δ11B-based proxy CO2 data, but less so by alkenone-based records. The simulated concentrations, and the variations therein, are larger than expected from global mean temperature changes. Our findings thus suggest a smaller Earth System Sensitivity than previously thought. This is explained by a more restricted role of land ice variability in the Pliocene. The largest uncertainty in our simulation arises from the mass balance formulation of East Antarctica, which governs the variability in sea level but only modestly affects the modeled CO2 concentrations.
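The inverse-forcing approach can be caricatured in a few lines: step through the δ18O record and nudge CO2 in the direction that closes the gap between modeled and observed benthic δ18O. The Python sketch below uses a deliberately trivial placeholder model, not the coupled climate-ice sheet model, to show only the control logic.

    # Toy sketch of inverse forcing: relax CO2 toward the value that makes the
    # modeled benthic d18O match the observed record. The 'model' is a trivial
    # placeholder (warmer, i.e. higher CO2, -> lighter benthic d18O).
    def modeled_d18o(co2_ppm):
        return 5.0 - 0.004 * co2_ppm

    def invert_co2(observed_d18o, co2_init=280.0, gain=50.0):
        co2 = co2_init
        trajectory = []
        for obs in observed_d18o:
            mismatch = modeled_d18o(co2) - obs   # modeled too heavy -> raise CO2
            co2 += gain * mismatch
            trajectory.append(co2)
        return trajectory

    # A fake glacial-interglacial d18O excursion (per mil):
    record = [3.8, 3.9, 4.2, 4.4, 4.1, 3.7, 3.6]
    print([round(c) for c in invert_co2(record)])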
Abstract:
The production pathways of the prominent volatile organic halogen compound methyl iodide (CH3I) are not fully understood. Based on observations, production of CH3I via photochemical degradation of organic material or via phytoplankton production has been proposed. Neither correlations between observed biological and environmental variables nor biogeochemical modeling have so far identified the source of methyl iodide unambiguously. In this study, we address this question of source mechanisms with a three-dimensional global ocean general circulation model including biogeochemistry (MPIOM-HAMOCC, where MPIOM is the Max Planck Institute Ocean Model and HAMOCC the HAMburg Ocean Carbon Cycle model) by carrying out a series of sensitivity experiments. The simulated fields are compared with a newly available global data set. Simulated distribution patterns and emissions of CH3I differ largely between the two production pathways. The evaluation of our model results against observations shows that, on the global scale, observed surface concentrations of CH3I can best be explained by the photochemical production pathway. Our results further emphasize that correlations between CH3I and abiotic or biotic factors do not necessarily provide meaningful insights concerning the source of origin. Overall, we find a net global annual CH3I air-sea flux that ranges between 70 and 260 Gg/yr. On the global scale, the ocean acts as a net source of methyl iodide for the atmosphere, though in some regions in boreal winter the fluxes are in the opposite direction (from the atmosphere to the ocean).
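The two candidate source mechanisms compared in such sensitivity experiments can be contrasted with a hedged sketch: a photochemical term scaled by shortwave radiation and dissolved organic matter versus a biological term scaled by primary production. The rate constants below are invented placeholders, not HAMOCC parameter values.

    # Hedged sketch of the two candidate CH3I source terms:
    # (a) photochemical degradation of dissolved organic matter, scaled by
    # shortwave radiation; (b) direct biological production, scaled by
    # primary production. Constants are illustrative placeholders.
    def ch3i_photochemical(shortwave_wm2, doc, k_photo=1e-6):
        """Production rate ~ light x dissolved organic carbon."""
        return k_photo * shortwave_wm2 * doc

    def ch3i_biological(primary_production, k_bio=5e-5):
        """Production rate ~ phytoplankton primary production."""
        return k_bio * primary_production

    # Tropical open-ocean gridpoint vs. high-latitude bloom gridpoint:
    print(ch3i_photochemical(shortwave_wm2=250.0, doc=60.0))   # light-driven
    print(ch3i_biological(primary_production=12.0))            # bloom-driven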
Abstract:
The emerging use of real-time 3D-based multimedia applications imposes strict quality of service (QoS) requirements on both access and core networks. These requirements and their impact on providing end-to-end 3D videoconferencing services have been studied within the Spanish-funded VISION project, where different scenarios were implemented demonstrating an agile stereoscopic video call that might be offered to the general public in the near future. In view of these requirements, we designed an integrated access and core converged network architecture that provides the requested QoS to end-to-end IP sessions. Novel functional blocks are proposed to control core optical networks, the functionality of the standard blocks is redefined, and the signaling is improved to better meet the requirements of future multimedia services. An experimental test-bed to assess the feasibility of the solution was also deployed. In this test-bed, the set-up and release of end-to-end sessions meeting specific QoS requirements are demonstrated, and the impact of QoS degradation on user-perceived quality is quantified. In addition, scalability results show that the proposed signaling architecture is able to cope with a large number of requests while introducing almost negligible delay.
Abstract:
Some requirements for engineering programmes, such as an ability to use the techniques, skills, and modern engineering tools necessary for engineering practice, an understanding of professional and ethical responsibility, or an ability to communicate effectively, call for new activities designed to measure students' progress. Negotiations take place continuously at every stage of a project, so the ability of engineers and managers to carry out a negotiation effectively is crucial to the success or failure of projects and businesses. Since negotiation involves communication between individuals motivated to come together in an agreement for mutual benefit, it can be used to develop these personal abilities. The main objective of this study was to evaluate the adequacy of mixing playing sessions with theory to maximise students' strategic vision in combination with negotiating skills. Results show that combining play with theoretical training teaches students to strategise through the analysis and discussion of alternatives, leading to better outcomes.
Abstract:
We present a novel framework for the encoding latency analysis of arbitrary multiview video coding prediction structures. This framework avoids the need to consider a specific encoder architecture by assuming unlimited processing capacity in the multiview encoder. Under this assumption, only the influence of the prediction structure and the processing times has to be considered, and the encoding latency is computed systematically by means of a graph model. The results obtained with this model are valid for a multiview encoder with sufficient processing capacity and serve as a lower bound otherwise. Furthermore, with the objective of low-latency encoder design with a low penalty on rate-distortion performance, the graph model allows us to identify the prediction relationships that add the most encoding latency to the encoder. Experimental results for JMVM prediction structures illustrate how low-latency prediction structures with a low rate-distortion penalty can be derived in a systematic manner using the new model.
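The core of the graph model can be illustrated with a small sketch: frames and their prediction dependencies form a directed acyclic graph; under unlimited processing capacity, a frame's earliest finish time is its own processing time plus the latest finish time among its references, and the encoding latency of the structure is the longest dependency path. The toy prediction structure below is illustrative, not one of the JMVM structures.

    # Sketch: encoding latency as the longest path in the frame-dependency DAG.
    # finish(frame) = proc_time(frame) + max(finish(ref) for ref in references).
    from functools import lru_cache

    proc_time = {"I0": 1.0, "P1": 1.0, "B2": 1.5, "P3": 1.0, "B4": 1.5}
    references = {                       # frame -> frames it predicts from
        "I0": [],
        "P1": ["I0"],
        "B2": ["I0", "P3"],
        "P3": ["P1"],
        "B4": ["P3", "B2"],
    }

    @lru_cache(maxsize=None)
    def finish_time(frame):
        deps = references[frame]
        return proc_time[frame] + (max(map(finish_time, deps)) if deps else 0.0)

    latency = max(finish_time(f) for f in proc_time)
    print(f"encoding latency: {latency} time units")   # longest path: 6.0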
Abstract:
Industrial applications of computer vision sometimes require the detection of atypical objects that occur as small groups of pixels in digital images. These objects are difficult to single out because they are small and randomly distributed. In this work we propose an image segmentation method using the novel Ant System-based Clustering Algorithm (ASCA). ASCA models the foraging behaviour of ants, which move through the data space searching for high data-density regions and leave pheromone trails along their paths. The pheromone map is used to identify the exact number of clusters and to assign the pixels to these clusters using the pheromone gradient. We applied ASCA to the detection of microcalcifications in digital mammograms and compared its performance with state-of-the-art clustering algorithms such as the 1D Self-Organizing Map, k-Means, Fuzzy c-Means, and Possibilistic Fuzzy c-Means. The main advantage of ASCA is that the number of clusters does not need to be known a priori. The experimental results show that ASCA is more efficient than the other algorithms in detecting small clusters of atypical data.
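A heavily simplified Python sketch of the ASCA idea (not the published algorithm) is given below: ants walk over the data points, biased toward high local density, and deposit pheromone; after evaporation, the points with the most accumulated pheromone mark the dense clusters.

    # Simplified ant-based clustering sketch: density-biased walks deposit
    # pheromone; evaporation suppresses stale trails; pheromone peaks end up
    # inside the dense regions. Synthetic data, invented parameters.
    import numpy as np

    rng = np.random.default_rng(3)
    # Two dense blobs plus sparse noise -- a stand-in for pixel features.
    data = np.vstack([
        rng.normal([0, 0], 0.3, (100, 2)),
        rng.normal([4, 4], 0.3, (100, 2)),
        rng.uniform(-2, 6, (20, 2)),
    ])

    def local_density(i, radius=0.5):
        return np.sum(np.linalg.norm(data - data[i], axis=1) < radius)

    pheromone = np.zeros(len(data))
    positions = rng.integers(0, len(data), size=20)     # 20 ants on data points
    for _ in range(200):
        for a in range(len(positions)):
            # Ant moves to the densest of its current point plus a few
            # randomly sampled candidate points, and deposits pheromone there.
            candidates = np.append(rng.integers(0, len(data), size=5), positions[a])
            best = max(candidates, key=local_density)
            positions[a] = best
            pheromone[best] += local_density(best)
        pheromone *= 0.95                               # evaporation

    # High-pheromone points fall inside the two dense blobs:
    top = np.argsort(pheromone)[-5:]
    print(np.round(data[top], 1))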