110 results for Java Remote Method Invocation
Abstract:
An adaptation technique based on the synoptic atmospheric circulation to forecast local precipitation, namely the analogue method, has been implemented for the western Swiss Alps. During the calibration procedure, relevance maps were established for the geopotential height data. These maps highlight the locations where the synoptic circulation was found to be of interest for precipitation forecasting at two rain gauge stations (Binn and Les Marécottes), both located in the alpine Rhône catchment at a distance of about 100 km from each other. These two stations are sensitive to different atmospheric circulations. We have observed that the most relevant data for the analogue method can be found where specific atmospheric circulation patterns appear concomitantly with heavy precipitation events. These skilled regions are consistent with the atmospheric flows illustrated, for example, by means of the back trajectories of air masses. Indeed, the circulation recurrently diverges from the climatology during days with strong precipitation on the southern part of the alpine Rhône catchment. We have found that of the 152 days with precipitation amounts above 50 mm at the Binn station, only 3 did not show a southerly flow trajectory, meaning that such a circulation was present for 98% of the events. The time evolution of the relevance maps confirms that the atmospheric circulation variables have significantly better forecasting skill close to the precipitation period, and that it seems pointless for the analogue method to consider circulation information from days before a precipitation event as a primary predictor. Even though the occurrence of some critical circulation patterns leading to heavy precipitation events can be detected by precursors at remote locations and one week ahead (Grazzini, 2007; Martius et al., 2008), time extrapolation by the analogue method seems to be rather poor.
This would suggest, in accordance with previous studies (Obled et al., 2002; Bontron and Obled, 2005), that time extrapolation should be left to the Global Circulation Model, which can provide the atmospheric variables used by the adaptation method.
Abstract:
Nowadays, the joint exploitation of images acquired daily by remote sensing instruments and of images available from archives allows detailed monitoring of the transitions occurring at the surface of the Earth. These modifications of the land cover generate spectral discrepancies that can be detected via the analysis of remote sensing images. Independently of the origin of the images and of the type of surface change, correct processing of such data implies the adoption of flexible, robust and possibly nonlinear methods, to correctly account for the complex statistical relationships characterizing the pixels of the images. This thesis deals with the development and application of advanced statistical methods for multi-temporal optical remote sensing image processing tasks. Three different families of machine learning models have been explored and fundamental solutions for change detection problems are provided. In the first part, change detection with user supervision has been considered. In a first application, a nonlinear classifier has been applied with the intent of precisely delineating flooded regions from a pair of images. In a second case study, the spatial context of each pixel has been injected into another nonlinear classifier to obtain a precise mapping of new urban structures. In both cases, the user provides the classifier with examples of what they believe has or has not changed. In the second part, a completely automatic and unsupervised method for precise binary detection of changes has been proposed. The technique allows very accurate mapping without any user intervention, which is particularly useful when the readiness and reaction time of the system are a crucial constraint. In the third part, the problem of statistical distributions shifting between acquisitions is studied. Two approaches that transform the pair of bi-temporal images to reduce their differences unrelated to changes in land cover are studied.
The methods align the distributions of the images, so that the pixel-wise comparison can be carried out with higher accuracy. Furthermore, the second method can deal with images from different sensors, regardless of the dimensionality of the data or the spectral information content. This opens the door to possible solutions for a crucial problem in the field: detecting changes when the images have been acquired by two different sensors.
Abstract:
In this paper, we propose two active learning algorithms for the semiautomatic definition of training samples in remote sensing image classification. Based on predefined heuristics, the classifier ranks the unlabeled pixels and automatically chooses those that are considered the most valuable for its improvement. Once the pixels have been selected, the analyst labels them manually and the process is iterated. Starting with a small and nonoptimal training set, the model itself builds the optimal set of samples which minimizes the classification error. We have applied the proposed algorithms to a variety of remote sensing data, including very high resolution and hyperspectral images, using support vector machines. Experimental results confirm the consistency of the methods. Using the proposed methods, the required number of training samples can be reduced to 10% while reaching the same level of accuracy as larger data sets. A comparison with a state-of-the-art active learning method, margin sampling, is provided, highlighting the advantages of the proposed methods. The effect of spatial resolution and class separability on the quality of the pixel selection is also discussed.
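The loop described above, rank the unlabeled pixels, have the analyst label the chosen ones, and retrain, can be sketched as follows. This is an illustrative toy, not the authors' implementation: it uses a least-squares linear classifier in place of the SVM, synthetic two-class "pixels", and a margin-style ranking (distance to the decision boundary); all names and parameter values are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class "pixels": 2-D features, labels in {-1, +1}.
X = np.vstack([rng.normal(-1.0, 1.0, (100, 2)), rng.normal(+1.0, 1.0, (100, 2))])
y = np.hstack([-np.ones(100), np.ones(100)])

def fit_linear(X_lab, y_lab):
    # Least-squares linear classifier (a stand-in for the SVM of the paper).
    A = np.hstack([X_lab, np.ones((len(X_lab), 1))])
    w, *_ = np.linalg.lstsq(A, y_lab, rcond=None)
    return w

def decision(w, X):
    # Signed distance-like score; the sign is the predicted class.
    return np.hstack([X, np.ones((len(X), 1))]) @ w

def most_ambiguous(scores, k):
    # Margin-style heuristic: pick the k samples closest to the boundary.
    return np.argsort(np.abs(scores))[:k]

labeled = [0, 40, 80, 120, 160]                    # small initial training set
unlabeled = [i for i in range(len(X)) if i not in labeled]

for _ in range(10):                                # active-learning rounds
    w = fit_linear(X[labeled], y[labeled])
    picked = most_ambiguous(decision(w, X[unlabeled]), 3)
    for p in sorted(picked, reverse=True):         # the "analyst" labels them
        labeled.append(unlabeled.pop(p))

w = fit_linear(X[labeled], y[labeled])
accuracy = float(np.mean(np.sign(decision(w, X)) == y))
```

Because only the ambiguous pixels get labeled, the training set stays small while accuracy approaches that of a much larger random sample, which is the effect the abstract reports.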
Abstract:
A semisupervised support vector machine is presented for the classification of remote sensing images. The method exploits the wealth of unlabeled samples for regularizing the training kernel representation locally by means of cluster kernels. The method learns a suitable kernel directly from the image and thus avoids assuming a priori signal relations by using a predefined kernel structure. Good results are obtained in image classification examples when few labeled samples are available. The method scales almost linearly with the number of unlabeled samples and provides out-of-sample predictions.
Abstract:
Following their detection and seizure by police and border guard authorities, false identity and travel documents are usually scanned, producing digital images. This research investigates the potential of these images to classify false identity documents, highlight links between documents produced by the same modus operandi or from the same source, and thus support forensic intelligence efforts. Inspired by previous research work on digital images of Ecstasy tablets, a systematic and complete method has been developed to acquire, collect, process and compare images of false identity documents. This first part of the article highlights the critical steps of the method and the development of a prototype that processes regions of interest extracted from images. Acquisition conditions have been fine-tuned in order to optimise the reproducibility and comparability of images. Different filters and comparison metrics have been evaluated, and the performance of the method has been assessed using two calibration and validation sets of documents, made up of 101 Italian driving licenses and 96 Portuguese passports seized in Switzerland, among which some were known to come from common sources. Results indicate that the use of Hue and Edge filters, or their combination, to extract profiles from images, followed by the comparison of profiles with a Canberra distance-based metric, provides the most accurate classification of documents. The method also appears to be quick, efficient and inexpensive. It can easily be operated from remote locations and shared amongst different organisations, which makes it very convenient for future operational applications. The method could serve as a fast first triage method that may help target more resource-intensive profiling methods (based on a visual, physical or chemical examination of documents, for instance).
Its contribution to forensic intelligence and its application to several sets of false identity documents seized by police and border guards will be developed in a forthcoming article (part II).
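The Canberra distance-based comparison mentioned above can be sketched as follows. The profile values are invented for illustration, and the Hue/Edge profile extraction itself is not reproduced here.

```python
import numpy as np

def canberra(p, q):
    """Canberra distance between two nonnegative profiles:
    sum over components of |p_i - q_i| / (|p_i| + |q_i|),
    skipping components where both values are zero."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    num = np.abs(p - q)
    den = np.abs(p) + np.abs(q)
    mask = den > 0
    return float(np.sum(num[mask] / den[mask]))

# Invented profiles from three document images: the first two mimic
# documents from a common source, the third a different one.
profile_a = np.array([10.0, 8.0, 0.0, 5.0])
profile_b = np.array([11.0, 7.0, 0.0, 5.0])
profile_c = np.array([2.0, 15.0, 6.0, 1.0])

d_same = canberra(profile_a, profile_b)   # small distance -> likely linked
d_diff = canberra(profile_a, profile_c)   # larger distance -> likely unlinked
```

A plausible reason for this metric on image profiles is that each component is normalised by its own magnitude, so low-intensity regions of a profile are not swamped by high-intensity ones.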
Abstract:
The aim of this retrospective study was to compare the clinical and radiographic results after TKA (PFC, DePuy), performed either by computer-assisted navigation (CAS, Brainlab, Johnson & Johnson) or by conventional means. Material and methods: Between May and December 2006 we reviewed 36 conventional TKAs performed between 2002 and 2003 (group A) and 37 navigated TKAs performed between 2005 and 2006 (group B) by the same experienced surgeon. The mean age in group A was 74 years (range 62-90) and 73 years (range 58-85) in group B, with a similar age distribution. The preoperative mechanical axes in group A ranged from -13° varus to +13° valgus (mean absolute deviation 6.83°, SD 3.86°); in group B from -13° to +16° (mean absolute deviation 5.35°, SD 4.29°). Patients with a previous tibial osteotomy or revision arthroplasty were excluded from the study. Examination was done by an experienced orthopedic resident independent of the surgeon. All patients had pre- and postoperative long standing radiographs. The IKSS and the WOMAC were used to determine the clinical outcome. Patients' degree of satisfaction was assessed on a visual analogue scale (VAS). Results: 32 of the 37 navigated TKAs (86.5%) showed a postoperative mechanical axis within 3 degrees of valgus or varus deviation, compared to only 24 (66%) of the 36 standard TKAs. This difference was significant (p = 0.045). The mean absolute deviation from the neutral axis was 3.00° (range -5° to +9°, SD 1.75°) in group A compared to 1.54° (range -5° to +4°, SD 1.41°) in group B, a highly significant difference (p < 0.001). Furthermore, both groups showed a significant postoperative improvement of their mean IKSS values (group A: 89 preoperatively to 169 postoperatively; group B: 88 to 176) without a significant difference between the two groups. Neither the WOMAC nor the patients' degree of satisfaction, as assessed by VAS, showed significant differences.
Operation time was significantly longer in group B (mean 119.9 min) than in group A (mean 99.6 min; p < 0.001). Conclusion: Our study showed a consistent, significant improvement of postoperative frontal alignment in TKA by computer-assisted navigation (CAS) compared to standard methods, even in the hands of a surgeon well experienced in standard TKA implantation. However, the follow-up time of this study was not long enough to judge differences in clinical outcome. Thus, the relevance of computer navigation for the clinical outcome and survival of TKA remains to be proved in long-term studies to justify the longer operation time. References: 1. Stulberg SD. Clin Orthop Relat Res. 2003;(416):177-84. 2. Chauhan SK. JBJS Br. 2004;86(3):372-7. 3. Bäthis H, et al. Orthopäde. 2006;35(10):1056-65.
Abstract:
Drug abuse is a widespread problem affecting both teenagers and adults. Nitrous oxide is becoming increasingly popular as an inhalation drug, causing harmful neurological and hematological effects. Some gas chromatography-mass spectrometry (GC-MS) methods for nitrous oxide measurement have been described previously. The main drawbacks of these methods include a lack of sensitivity for forensic applications and an inability to quantitatively determine the concentration of the gas present. The following study provides a validated HS-GC-MS method which incorporates hydrogen sulfide as a suitable internal standard, allowing the quantification of nitrous oxide. Upon analysis, the sample and internal standard have similar retention times and are eluted quickly from the molecular sieve 5Å PLOT capillary column and the Porabond Q column, therefore providing rapid data collection whilst preserving well-defined peaks. After validation, the method was applied to a real case of N2O intoxication, indicating the concentrations present in a mono-intoxication.
Abstract:
Dispersal mechanisms and competition together play a key role in the spatial distribution of a population. Species that disperse via fission are likely to experience high levels of localized competitive pressure from conspecifics relative to species that disperse in other ways. Although fission dispersal occurs in many species, its ecological and behavioural effects remain unclear. We compared the foraging effort, nest spatial distribution and aggression of two sympatric ant species that differ in reproductive dispersal: Streblognathus peetersi, which disperses by group fission, and Plectroctena mandibularis, which disperses by solitary wingless queens. We found that although both species share space and have similar foraging strategies, they differ in nest distribution and aggressive behaviour. The spatial distribution of S. peetersi nests was extremely aggregated, and workers were less aggressive towards conspecifics from nearby nests than towards distant conspecifics and all heterospecific workers. By contrast, the spatial distribution of P. mandibularis nests was overdispersed, and workers were equally aggressive towards conspecific and heterospecific competitors regardless of nest distance. Finally, laboratory experiments showed that familiarity led to the positive relationship between aggression and nest distance in S. peetersi. While unfamiliar individuals were initially aggressive, the level of aggression decreased within 1 h of contact, and continued to decrease over 24 h. Furthermore, individuals from nearby nests that were not aggressive could be induced to aggression after prolonged isolation. Overall, these results suggest that low aggression mediated by familiarity could provide benefits for a species with fission reproduction and an aggregated spatial distribution.
Abstract:
This paper presents a semisupervised support vector machine (SVM) that efficiently integrates the information of both labeled and unlabeled pixels. The method's performance is illustrated on the relevant problem of very high resolution image classification of urban areas. The SVM is trained with the linear combination of two kernels: a base kernel working only with labeled examples is deformed by a likelihood kernel encoding similarities between labeled and unlabeled examples. Results obtained on very high resolution (VHR) multispectral and hyperspectral images show the relevance of the method in the context of urban image classification. Moreover, its simplicity and the few parameters involved make the method versatile and workable by inexperienced users.
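The kernel combination described above can be sketched as a convex sum of two Gram matrices. This is a schematic stand-in, not the paper's construction: the likelihood kernel there is built from unlabeled-data similarities, whereas here a second RBF kernel with a different bandwidth merely plays its role; all parameter values are placeholders.

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    # Gaussian RBF Gram matrix between the rows of X and the rows of Y.
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq)

def composite_kernel(X, Y, mu=0.7, gamma_base=0.5, gamma_lik=0.05):
    """Convex combination of a 'base' kernel and a second kernel standing
    in for the likelihood kernel of the abstract. A convex combination of
    positive semidefinite kernels is itself positive semidefinite, so the
    result is a valid SVM kernel."""
    return mu * rbf_kernel(X, Y, gamma_base) + (1.0 - mu) * rbf_kernel(X, Y, gamma_lik)

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 4))        # 30 "pixels" with 4 spectral bands
K = composite_kernel(X, X)          # symmetric, PSD Gram matrix
```

In practice such a precomputed Gram matrix is handed directly to a kernel SVM solver; only the mixing weight and the kernel bandwidths need tuning, which matches the abstract's point about the small number of parameters.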
Abstract:
Q-sort is a research method that allows the definition of profiles of attitudes towards a set of statements, ordered in relation to each other. As part of Q methodology, the qualitative analysis of Q-sorts is based on quantitative techniques. This method is of particular interest for research in the health professions, a field in which the attitudes of patients and professionals are very important. The method is presented in this article, along with an example of its application in nursing in old-age psychiatry.
Abstract:
In contemporary society, religious signification and secular systems mix and influence each other. Holistic conceptions of a world in which man is integrated harmoniously with nature meet representations of a world run by an immanent God. In the marketplace of these various systems, the individual goes from one system to another, following his immediate needs and expectations without necessarily leaving any marks in a meaningful long-term system. This article presents the first results of ongoing research in Switzerland on contemporary religion, focusing on the (new) paths of socialization of modern individuals and the various (non-)belief systems that they simultaneously develop.
Abstract:
A novel approach to measuring carbon dioxide (CO2) in gaseous samples, based on precise and accurate quantification with a (13)CO2 internal standard generated in situ, is presented. The main goal of this study was to provide an innovative headspace gas chromatography-mass spectrometry (HS-GC-MS) method applicable to the routine determination of CO2. The main drawback of the GC methods discussed in the literature for CO2 measurement is the lack of a specific internal standard necessary to perform quantification. CO2 is still quantified by external calibration, without taking into account the analytical problems that often arise with gaseous samples. To avoid the manipulation of a stable isotope-labeled gas, we chose to generate the labeled internal standard gas ((13)CO2) in situ, on the basis of the stoichiometric formation of CO2 by the reaction of hydrochloric acid (HCl) with sodium hydrogen carbonate (NaH(13)CO3). This method allows a precise measurement of the CO2 concentration and was validated on various human postmortem gas samples in order to study its efficiency.
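The quantification principle above, a known amount of (13)CO2 generated in situ followed by a peak-area ratio, can be sketched numerically. The masses, peak areas and the unit response factor below are illustrative assumptions, not values from the study.

```python
# Approximate molar mass of NaH(13)CO3 (one 13C in place of 12C), g/mol.
M_NAH13CO3 = 85.0

def moles_is_generated(mass_nah13co3_g):
    # NaH(13)CO3 + HCl -> NaCl + H2O + (13)CO2: 1 mol of salt gives 1 mol of gas,
    # so the amount of internal standard is fixed by the weighed-in mass.
    return mass_nah13co3_g / M_NAH13CO3

def co2_amount(area_co2, area_13co2, n_is_mol, response_factor=1.0):
    """Internal-standard quantification: the analyte amount equals the
    peak-area ratio times the known amount of internal standard,
    corrected by a relative response factor from calibration."""
    return (area_co2 / area_13co2) * n_is_mol / response_factor

n_is = moles_is_generated(0.0085)      # 8.5 mg of NaH(13)CO3 -> 1e-4 mol of (13)CO2
n_co2 = co2_amount(area_co2=2.0, area_13co2=1.0, n_is_mol=n_is)
```

Because the internal standard co-elutes and behaves like the analyte, instrument drift and headspace losses cancel in the area ratio, which is the advantage over the external calibration criticised in the abstract.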
Abstract:
Intraoperative cardiac imaging plays a key role during transcatheter aortic valve replacement. In recent years, new techniques and new tools for improved image quality and virtual navigation have been proposed in order to simplify and standardize stent valve positioning and implantation. However, routine use of the new techniques may require major economic investments or specific knowledge and skills and, for this reason, they may not be accessible to the majority of cardiac centres involved in transcatheter valve replacement projects. Additionally, they still require injections of contrast medium to obtain computed images. Therefore, we have developed and describe here a very simple and intuitive method of positioning balloon-expandable stent valves, which represents the evolution of the 'dumbbell' technique for echocardiography-guided transcatheter valve replacement without angiography. This method, based on the partial inflation of the balloon catheter during positioning, traps the crimped valve in the aortic valve orifice and, consequently, very near to the ideal landing zone. It does not require specific echocardiographic knowledge; it does not require angiographies, which increase the risk of postoperative kidney failure in elderly patients; and it can also be performed in centres not equipped with a hybrid operating room.
Abstract:
Matrix effects, which represent an important issue in liquid chromatography coupled to mass spectrometry or tandem mass spectrometry detection, should be closely assessed during method development. For quantitative analysis, the use of a stable isotope-labelled internal standard with physico-chemical properties and ionization behaviour similar to those of the analyte is recommended. In this paper, an example of the choice of a co-eluting deuterated internal standard to compensate for short-term and long-term matrix effects in chiral (R,S)-methadone plasma quantification is reported. The method was fully validated over a concentration range of 5-800 ng/mL for each methadone enantiomer, with satisfactory relative bias (-1.0 to 1.0%), repeatability (0.9-4.9%) and intermediate precision (1.4-12.0%). From the results obtained during validation, a control chart process covering 52 series of routine analyses was established using both the intermediate precision standard deviation and the FDA acceptance criteria. The results of routine quality control samples were generally within the ±15% variability around the target value, and mainly within the two-standard-deviation interval, illustrating the long-term stability of the method. The intermediate precision variability estimated during method validation was found to be consistent with the routine use of the method. During this period, 257 trough-concentration and 54 peak-concentration plasma samples from patients undergoing (R,S)-methadone treatment were successfully analysed for routine therapeutic drug monitoring.
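The two acceptance rules used above, a ±15% window around the nominal value and a two-standard-deviation control-chart interval, can be written down directly. The helper and the example values are invented for illustration; in the study the standard deviation would be the intermediate precision estimated during validation.

```python
def qc_flags(measured, target, sd, tolerance=0.15, n_sd=2.0):
    """Check a routine QC result against two criteria: an FDA-style
    +/-15% window around the nominal value, and a two-standard-deviation
    control-chart interval built from the intermediate precision SD."""
    within_tolerance = abs(measured - target) <= tolerance * target
    within_control = abs(measured - target) <= n_sd * sd
    return within_tolerance, within_control

# Invented example: QC target 100 ng/mL, intermediate-precision SD 5 ng/mL.
ok_both = qc_flags(108.0, 100.0, 5.0)      # inside both intervals
ok_fda_only = qc_flags(112.0, 100.0, 5.0)  # inside +/-15% but outside 2 SD
```

Tracking both flags distinguishes a run that is merely acceptable (within ±15%) from one that is also statistically in control, which is how the abstract can report results "generally" within ±15% and "mainly" within two standard deviations.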