13 results for CMF, molecular cloud, extraction algorithm

in BORIS: Bern Open Repository and Information System - Bern - Switzerland


Relevance:

100.00%

Publisher:

Abstract:

Traditionally, ontologies describe knowledge representation in a denotational, formalized, and deductive way. In this paper, we additionally propose a semiotic, inductive, and approximate approach to ontology creation. We define a conceptual framework, a semantics extraction algorithm, and a first proof of concept applying the algorithm to a small set of Wikipedia documents. Intended as an extension to the prevailing top-down ontologies, we introduce an inductive fuzzy grassroots ontology, which organizes itself organically from existing natural language Web content. Using inductive and approximate reasoning to reflect the natural way in which knowledge is processed, the ontology’s bottom-up build process creates emergent semantics learned from the Web. By this means, the ontology acts as a hub for computing with words described in natural language. For Web users, the structural semantics are visualized as inductive fuzzy cognitive maps, allowing an initial form of intelligence amplification. Finally, we present an implementation of our inductive fuzzy grassroots ontology. Thus, this paper contributes an algorithm for the extraction of fuzzy grassroots ontologies from Web data by inductive fuzzy classification.
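
The core contribution is extraction of a grassroots ontology by inductive fuzzy classification over Web documents. As a rough, hypothetical sketch (reducing the idea to term co-occurrence counts normalized into fuzzy membership degrees, not the paper's actual algorithm):

```python
from collections import Counter
from itertools import combinations

# Hypothetical illustration: induce fuzzy "related-to" degrees between terms
# from their co-occurrence in documents. The real inductive fuzzy
# classification in the paper is considerably more elaborate.
def induce_fuzzy_relations(documents):
    term_counts = Counter()
    pair_counts = Counter()
    for doc in documents:
        terms = set(doc.lower().split())
        term_counts.update(terms)
        pair_counts.update(combinations(sorted(terms), 2))
    # Membership degree: co-occurrence normalized by the rarer term's frequency.
    return {
        (a, b): count / min(term_counts[a], term_counts[b])
        for (a, b), count in pair_counts.items()
    }

docs = ["fuzzy ontology learned from web content",
        "grassroots ontology emerges from web documents"]
relations = induce_fuzzy_relations(docs)
print(relations[("ontology", "web")])  # fuzzy degree in (0, 1]
```

The resulting weighted term graph is the kind of bottom-up structure that could then be visualized as a fuzzy cognitive map.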

Relevance:

40.00%

Publisher:

Abstract:

The near-real-time retrieval of low stratiform cloud (LSC) coverage is of vital interest for disciplines such as meteorology, transport safety, the economy and air quality. Within this scope, a novel methodology is proposed which provides LSC occurrence probability estimates for a satellite scene. The algorithm is suited to 1 km × 1 km Advanced Very High Resolution Radiometer (AVHRR) data and was trained and validated against collocated SYNOP observations. Utilisation of these two combined data sources requires a formulation of constraints in order to discriminate cases where the LSC is overlaid by higher clouds. The LSC classification process is based on six features which are first converted to integer form by step functions and then combined by means of bitwise operations. Consequently, a set of values reflecting a unique combination of those features is derived, which is further employed to extract the LSC occurrence probability estimates from precomputed look-up vectors (LUV). Although the validation analyses confirmed good performance of the algorithm, some inevitable misclassifications as other optically thick clouds were reported. Moreover, the comparison against the Polar Platform System (PPS) cloud-type product revealed the superior classification accuracy of the proposed algorithm. From the temporal perspective, the results revealed the presence of diurnal and annual LSC probability cycles over Europe.
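
The abstract describes converting six features to integers with step functions and packing them bitwise into an index for precomputed look-up vectors. A minimal illustrative sketch (the thresholds, bit widths and LUV contents below are placeholder assumptions, not the trained values) could be:

```python
import numpy as np

# Hypothetical step-function thresholds for two of the six features; the real
# algorithm derives them from training against collocated SYNOP observations.
FEATURE_THRESHOLDS = [
    np.array([0.2, 0.4, 0.6]),   # feature 1 -> integer code 0..3 (2 bits)
    np.array([250.0, 270.0]),    # feature 2 -> integer code 0..2 (2 bits)
]
BITS_PER_FEATURE = [2, 2]

def luv_index(features):
    """Quantize features with step functions and combine them bitwise."""
    index = 0
    for value, thresholds, bits in zip(features, FEATURE_THRESHOLDS, BITS_PER_FEATURE):
        code = int(np.searchsorted(thresholds, value))  # step function
        index = (index << bits) | code                  # bitwise combination
    return index

# Precomputed look-up vector: one LSC occurrence probability per feature combination
# (filled with dummy values here).
look_up_vector = np.random.default_rng(0).uniform(size=2 ** sum(BITS_PER_FEATURE))
probability = look_up_vector[luv_index([0.35, 265.0])]
print(f"LSC occurrence probability: {probability:.2f}")
```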

Relevance:

40.00%

Publisher:

Abstract:

Recently, the ROSINA mass spectrometer suite on board the European Space Agency's Rosetta spacecraft discovered an abundant amount of molecular oxygen, O2, in the coma of Jupiter-family comet 67P/Churyumov-Gerasimenko, with O2/H2O = 3.80 ± 0.85%. It could be shown that O2 is indeed a parent species and that the derived abundances point to a primordial origin. Crucial questions are whether the O2 abundance is peculiar to comet 67P/Churyumov-Gerasimenko or to Jupiter-family comets in general, and whether Oort cloud comets such as comet 1P/Halley contain similar amounts of molecular oxygen. We investigated mass spectra obtained by the Neutral Mass Spectrometer instrument during the European Space Agency's Giotto flyby of comet 1P/Halley. Our investigation indicates that an O2 production rate of 3.7 ± 1.7% with respect to water is indeed compatible with the obtained Halley data and therefore that O2 might be a rather common and abundant parent species.

Relevance:

30.00%

Publisher:

Abstract:

In most pathology laboratories worldwide, formalin-fixed paraffin-embedded (FFPE) samples are the only tissue specimens available for routine diagnostics. Although commercial kits for diagnostic molecular pathology testing are becoming available, most current diagnostic tests are laboratory-based assays. Thus, there is a need for standardized procedures in molecular pathology, starting from the extraction of nucleic acids. To evaluate the current methods for extracting nucleic acids from FFPE tissues, 13 European laboratories, participating in the European FP6 program IMPACTS (www.impactsnetwork.eu), isolated nucleic acids from four diagnostic FFPE tissues using their routine methods, followed by quality assessment. The DNA-extraction protocols ranged from homemade protocols to commercial kits. Except for one homemade protocol, the majority gave comparable results in terms of the quality of the extracted DNA, measured by the ability to amplify differently sized control gene fragments by PCR. For array applications or tests that require an accurately determined DNA input, we recommend using silica-based adsorption columns for DNA recovery. For RNA extractions, the best results were obtained using chromatography-column-based commercial kits, which yielded the highest quantity and best assayable RNA. Quality testing using RT-PCR gave successful amplification of 200-250 bp PCR products from most tested tissues. Modifications of the proteinase K digestion time led to better results, even when commercial kits were applied. The results of the study emphasize the need for quality control of the nucleic acid extracts with standardized methods to prevent false-negative results and to allow data comparison among different diagnostic laboratories.

Relevance:

30.00%

Publisher:

Abstract:

Automatic identification and extraction of bone contours from X-ray images is an essential first step for further medical image analysis. In this paper we propose a 3D statistical model-based framework for proximal femur contour extraction from calibrated X-ray images. The automatic initialization is solved by an estimation of Bayesian network algorithm that fits a multiple-component geometrical model to the X-ray data. The contour extraction is accomplished by a non-rigid 2D/3D registration between a 3D statistical model and the X-ray images, in which bone contours are extracted by graphical-model-based Bayesian inference. Preliminary experiments on clinical data sets verified its validity.
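
The 2D/3D registration step couples a 3D statistical shape model with each calibrated X-ray view. A highly simplified sketch of the project-and-match idea (the camera matrix, edge map and cost are assumed placeholders, not the paper's actual Bayesian formulation) might be:

```python
import numpy as np

def project_points(points_3d, camera_matrix):
    """Project 3D model points into the calibrated X-ray image plane."""
    homogeneous = np.hstack([points_3d, np.ones((len(points_3d), 1))])
    projected = homogeneous @ camera_matrix.T        # (N, 3)
    return projected[:, :2] / projected[:, 2:3]      # perspective division

def registration_cost(shape_params, mean_shape, modes, camera_matrix, edge_map):
    """Distance of projected statistical-shape-model points to detected edges
    (edge_map assumed to be a distance transform of the X-ray edge image)."""
    points_3d = mean_shape + modes @ shape_params    # instantiate the shape model
    points_2d = project_points(points_3d.reshape(-1, 3), camera_matrix)
    rows = np.clip(points_2d[:, 1].astype(int), 0, edge_map.shape[0] - 1)
    cols = np.clip(points_2d[:, 0].astype(int), 0, edge_map.shape[1] - 1)
    return float(edge_map[rows, cols].sum())

# A generic optimizer would then search shape_params minimizing this cost
# jointly over all calibrated views.
```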

Relevance:

30.00%

Publisher:

Abstract:

Molecular diagnosis of canine bartonellosis can be extremely challenging and often requires the use of an enrichment culture approach followed by PCR amplification of bacterial DNA. HYPOTHESES: (1) The use of enrichment culture with PCR will increase molecular detection of bacteremia and will expand the diversity of Bartonella species detected. (2) Serological testing for Bartonella henselae and Bartonella vinsonii subsp. berkhoffii does not correlate with documentation of bacteremia. ANIMALS: Between 2003 and 2009, 924 samples from 663 dogs were submitted to the North Carolina State University, College of Veterinary Medicine, Vector Borne Diseases Diagnostic Laboratory for diagnostic testing with the Bartonella α-Proteobacteria growth medium (BAPGM) platform. Test results and medical records of those dogs were retrospectively reviewed. METHODS: PCR amplification of Bartonella sp. DNA after extraction from patient samples was compared with PCR after BAPGM enrichment culture. Indirect immunofluorescent antibody assays, used to detect B. henselae and B. vinsonii subsp. berkhoffii antibodies, were compared with PCR. RESULTS: Sixty-one of 663 dogs were culture positive or had Bartonella DNA detected by PCR, including B. henselae (30/61), B. vinsonii subsp. berkhoffii (17/61), Bartonella koehlerae (7/61), Bartonella volans-like (2/61), and Bartonella bovis (2/61). Coinfection with more than 1 Bartonella sp. was documented in 9/61 dogs. BAPGM culture was required for PCR detection in 32/61 cases. Only 7/19 and 4/10 infected dogs tested by IFA were B. henselae and B. vinsonii subsp. berkhoffii seroreactive, respectively. CONCLUSIONS AND CLINICAL IMPORTANCE: Dogs were most often infected with B. henselae or B. vinsonii subsp. berkhoffii based on PCR and enrichment culture, coinfection was documented, and various Bartonella species were identified. Most infected dogs did not have detectable Bartonella antibodies.

Relevance:

30.00%

Publisher:

Abstract:

The two major subtypes of diffuse large B-cell lymphoma (DLBCL), germinal centre B-cell-like (GCB-DLBCL) and activated B-cell-like (ABC-DLBCL), are defined by means of gene expression profiling (GEP). Patients with GCB-DLBCL survive longer with the current standard regimen R-CHOP than patients with ABC-DLBCL. As GEP is not part of the current routine diagnostic work-up, efforts have been made to find a substitute that involves immunohistochemistry (IHC). Various algorithms achieved this with 80-90% accuracy. However, conflicting results on the appropriateness of IHC have been reported. Because it is likely that the molecular subtypes will play a role in future clinical practice, we assessed the determination of the molecular DLBCL subtypes by means of IHC at our University Hospital, and some aspects of this determination elsewhere in Switzerland. The most frequently used Hans algorithm includes three antibodies (against CD10, bcl-6 and MUM1). From records of the routine diagnostic work-up, we identified 51 of 172 (29.7%) newly diagnosed and treated DLBCL cases from 2005 until 2010 with an assigned DLBCL subtype. DLBCL subtype information was expanded by means of tissue microarray analysis. The outcome for patients with the GCB subtype was significantly better than for those with the non-GC subtype, independent of the age-adjusted International Prognostic Index. We found a lack of standardisation in subtype determination by means of IHC in Switzerland and significant problems of reproducibility. We conclude that the Hans algorithm performs well in our hands and that awareness of this important matter is increasing. However, outside clinical trials, vigorous efforts to standardise IHC determination are needed as DLBCL subtype-specific therapies emerge.
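
The Hans algorithm assigns the subtype from the three immunostains as a short decision tree. A sketch of the commonly cited decision logic (assuming the usual 30% positivity cut-off; percentages of positive tumour cells are the inputs) looks like:

```python
def hans_subtype(cd10_pct, bcl6_pct, mum1_pct, cutoff=30.0):
    """GCB vs. non-GC assignment from CD10, bcl-6 and MUM1 immunostains,
    following the commonly cited Hans decision tree (cut-off assumed at 30%)."""
    if cd10_pct >= cutoff:
        return "GCB"                  # CD10 positive -> GCB
    if bcl6_pct < cutoff:
        return "non-GC"               # CD10 and bcl-6 negative -> non-GC
    return "GCB" if mum1_pct < cutoff else "non-GC"  # bcl-6 positive: MUM1 decides

print(hans_subtype(cd10_pct=5, bcl6_pct=60, mum1_pct=80))  # -> non-GC
```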

Relevance:

30.00%

Publisher:

Abstract:

Derivation of probability estimates complementary to geophysical data sets has gained special attention in recent years. Information about the confidence level of provided physical quantities is required to construct an error budget of higher-level products and to correctly interpret the final results of a particular analysis. Regarding the generation of products based on satellite data, a common input consists of a cloud mask which allows discrimination between surface and cloud signals. Further, the surface information is divided between snow and snow-free components. At any step of this discrimination process a misclassification in a cloud/snow mask propagates to higher-level products and may alter their usability. Within this scope, a novel probabilistic cloud mask (PCM) algorithm suited to 1 km × 1 km Advanced Very High Resolution Radiometer (AVHRR) data is proposed, which provides three types of probability estimates: between cloudy/clear-sky, cloudy/snow and clear-sky/snow conditions. As opposed to the majority of available techniques, which are usually based on a decision-tree approach, in the PCM algorithm all spectral, angular and ancillary information is used in a single step to retrieve probability estimates from precomputed look-up tables (LUTs). Moreover, the issue of deriving a single threshold value for a spectral test is overcome by the concept of a multidimensional information space, which is divided into small bins by an extensive set of intervals. The discrimination between snow and ice clouds and the detection of broken, thin clouds are enhanced by means of an invariant coordinate system (ICS) transformation. The study area covers a wide range of environmental conditions, spanning from Iceland through central Europe to northern parts of Africa, which exhibit diverse difficulties for cloud/snow masking algorithms. The retrieved PCM cloud classification was compared to the Polar Platform System (PPS) version 2012 and Moderate Resolution Imaging Spectroradiometer (MODIS) collection 6 cloud masks, SYNOP (surface synoptic observations) weather reports, the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) vertical feature mask version 3 and the MODIS collection 5 snow mask. The outcomes of the conducted analyses demonstrated the good detection skill of the PCM method, with results comparable to or better than the reference PPS algorithm.
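
The PCM algorithm bins all spectral, angular and ancillary inputs into a multidimensional information space and reads the three probabilities from precomputed LUTs in a single step. An illustrative sketch (the bin edges, feature names and LUT contents below are invented placeholders, not the trained tables) could be:

```python
import numpy as np

# Hypothetical bin edges for three inputs; the real algorithm uses an extensive
# set of intervals over many spectral, angular and ancillary features.
BIN_EDGES = {
    "reflectance_0_6": np.linspace(0.0, 1.0, 11),
    "bt_11um":         np.linspace(200.0, 320.0, 13),
    "sun_zenith":      np.linspace(0.0, 90.0, 10),
}

def bin_index(observation):
    """Map one observation into its cell of the multidimensional space."""
    return tuple(
        int(np.clip(np.digitize(observation[name], edges) - 1, 0, len(edges) - 2))
        for name, edges in BIN_EDGES.items()
    )

# Precomputed LUT: per cell, probabilities for cloudy / clear-sky / snow
# (dummy uniform values here; the real LUTs are derived offline from training data).
shape = tuple(len(edges) - 1 for edges in BIN_EDGES.values())
lut = np.full(shape + (3,), 1.0 / 3.0)

obs = {"reflectance_0_6": 0.55, "bt_11um": 255.0, "sun_zenith": 42.0}
p_cloudy, p_clear, p_snow = lut[bin_index(obs)]
```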

Relevance:

30.00%

Publisher:

Abstract:

Aim. This study was focused on (i) detection of specific BVDV antibodies within selected cattle farms, (ii) identification of persistently infected (PI) animals and (iii) genetic typing of selected BVDV isolates. Methods. RNA extraction, real-time polymerase chain reaction, ELISA, sequencing. Results. Specific BVDV antibodies were detected in 713 of 1,059 analyzed samples (67.3 per cent). This number is in agreement with findings in many cattle herds around the world. However, the number of positive samples differed between herds: while 57 samples out of 283 (20.1 per cent) were positive in the first herd, 400 out of 475 (84.2 per cent) and 256 out of 301 (85 per cent) animals were positive in the second and third herds. Animals with BVDV RNA were detected as follows: the real-time PCR assay detected BVDV RNA in 5 of 1,068 samples analyzed (0.5 per cent), with 4 positive samples out of 490 (0.8 per cent) and 1 out of 301 (0.33 per cent) found in the second and third herds, respectively; genetic material of BVDV was not found in the first herd. Data on the number of PI animals were in accord with the serological findings in the cattle herds involved in our study. The genetic typing of viral isolates revealed that only BVDV type 1 viruses were present. The phylogenetic analysis confirmed two BVDV-1 subtypes, namely b and f, and revealed that all 4 viruses from the second farm were typed as BVDV-1b and were absolutely identical in the 5'-UTR, whereas the virus from the third farm was typed as BVDV-1f. Conclusion. Our results indicate that BVDV infection is widespread in cattle herds in eastern Ukraine, which requires further research and the development of new approaches to improve the current situation.

Relevance:

30.00%

Publisher:

Abstract:

Cloud computing enables provisioning and distribution of highly scalable services in a reliable, on-demand and sustainable manner. However, the objectives of managing enterprise distributed applications in cloud environments under Service Level Agreement (SLA) constraints lead to challenges in maintaining optimal resource control. Furthermore, conflicting objectives in the management of cloud infrastructure and distributed applications might lead to violations of SLAs and inefficient use of hardware and software resources. This dissertation focuses on how SLAs can be used as an input to the cloud management system, increasing the efficiency of resource allocation as well as of infrastructure scaling. First, we present an extended SLA semantic model for modelling complex service dependencies in distributed applications and for enabling automated cloud infrastructure management operations. Second, we describe a multi-objective VM allocation algorithm for optimised resource allocation in infrastructure clouds. Third, we describe a method of discovering relations between the performance indicators of services belonging to distributed applications and then using these relations to build scaling rules that a cloud management system (CMS) can use for automated management of VMs. Fourth, we introduce two novel VM-scaling algorithms, which optimally scale systems composed of VMs based on given SLA performance constraints. All presented research works were implemented and tested using enterprise distributed applications.
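
The second contribution is a multi-objective VM allocation algorithm. A minimal sketch of how candidate placements could be scored against several objectives (the objectives, weights and normalization below are illustrative assumptions, not the dissertation's formulation) might be:

```python
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    free_cpu: float        # available cores
    free_ram: float        # available GB
    power_per_core: float  # watts per core, an illustrative energy objective

def placement_score(host, vm_cpu, vm_ram, weights=(0.5, 0.3, 0.2)):
    """Weighted-sum scalarization of three objectives: CPU packing, RAM packing
    and energy cost, each normalized to [0, 1]; higher is better."""
    if host.free_cpu < vm_cpu or host.free_ram < vm_ram:
        return float("-inf")                       # infeasible placement
    cpu_fit = 1.0 - (host.free_cpu - vm_cpu) / host.free_cpu
    ram_fit = 1.0 - (host.free_ram - vm_ram) / host.free_ram
    energy = 1.0 - min(host.power_per_core / 50.0, 1.0)  # assume 50 W/core worst case
    w_cpu, w_ram, w_energy = weights
    return w_cpu * cpu_fit + w_ram * ram_fit + w_energy * energy

hosts = [Host("h1", 16, 64, 12.0), Host("h2", 4, 8, 20.0)]
best = max(hosts, key=lambda h: placement_score(h, vm_cpu=2, vm_ram=4))
print(best.name)  # the tighter-packed host wins under these example weights
```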

Relevance:

30.00%

Publisher:

Abstract:

Advancements in cloud computing have enabled the proliferation of distributed applications, which require management and control of multiple services. However, without an efficient mechanism for scaling services in response to changing workload conditions, such as the number of connected users, application performance might suffer, leading to violations of Service Level Agreements (SLAs) and possibly inefficient use of hardware resources. Combining dynamic application requirements with the increased use of virtualised computing resources creates a challenging resource management context for application and cloud-infrastructure owners. In such complex environments, business entities use SLAs as a means of specifying quantitative and qualitative requirements of services. There are several challenges in running distributed enterprise applications in cloud environments, ranging from the instantiation of service VMs in the correct order using an adequate quantity of computing resources, to adapting the number of running services in response to varying external loads, such as the number of users. The application owner is interested in finding the optimum amount of computing and network resources to use to ensure that the performance requirements of all her/his applications are met. She/he is also interested in appropriately scaling the distributed services so that application performance guarantees are maintained even under dynamic workload conditions. Similarly, the infrastructure providers are interested in optimally provisioning the virtual resources onto the available physical infrastructure so that their operational costs are minimized, while maximizing the performance of tenants’ applications. Motivated by the complexities associated with the management and scaling of distributed applications, while satisfying multiple objectives (related to both consumers and providers of cloud resources), this thesis proposes a cloud resource management platform able to dynamically provision and coordinate the various lifecycle actions on both virtual and physical cloud resources using semantically enriched SLAs. The system focuses on dynamic sizing (scaling) of virtual infrastructures composed of virtual machine (VM)-bounded application services. We describe several algorithms for adapting the number of VMs allocated to the distributed application in response to changing workload conditions, based on SLA-defined performance guarantees. We also present a framework for dynamic composition of scaling rules for distributed services, which uses benchmark-generated application monitoring traces. We show how these scaling rules can be combined and included into semantic SLAs for controlling the allocation of services. We also provide a detailed description of the multi-objective infrastructure resource allocation problem and various approaches to solving it. We present a resource management system based on a genetic algorithm, which performs allocation of virtual resources while considering the optimization of multiple criteria. We show that our approach significantly outperforms reactive VM-scaling algorithms as well as heuristic-based VM-allocation approaches.
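
The thesis describes scaling rules that adjust the number of VMs when SLA-defined performance guarantees are at risk. A minimal sketch of such a rule (the metric, thresholds and limits are placeholder assumptions, not the thesis' benchmark-derived rules) might be:

```python
def scaling_decision(current_vms, response_time_ms, sla_limit_ms,
                     min_vms=1, max_vms=20, headroom=0.7):
    """Return the new VM count for one service based on an SLA latency guarantee.
    Scale out when latency exceeds the SLA limit, scale in when there is ample
    headroom (placeholder thresholds for illustration only)."""
    if response_time_ms > sla_limit_ms:
        return min(current_vms + 1, max_vms)       # SLA at risk: scale out
    if response_time_ms < headroom * sla_limit_ms and current_vms > min_vms:
        return current_vms - 1                     # plenty of headroom: scale in
    return current_vms

print(scaling_decision(current_vms=3, response_time_ms=420, sla_limit_ms=400))  # -> 4
```

More elaborate rules of this form could then be composed from monitoring traces and embedded in the semantic SLAs, as the abstract describes.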

Relevance:

30.00%

Publisher:

Abstract:

We present observations of total cloud cover and cloud type classification results from a sky camera network comprising four stations in Switzerland. In a comprehensive intercomparison study, records of total cloud cover from the sky camera, long-wave radiation observations, Meteosat, ceilometer, and visual observations were compared. Total cloud cover from the sky camera was in 65–85% of cases within ±1 okta with respect to the other methods. The sky camera overestimates cloudiness with respect to the other automatic techniques on average by up to 1.1 ± 2.8 oktas but underestimates it by 0.8 ± 1.9 oktas compared to the human observer. However, the bias depends on the cloudiness and therefore needs to be considered when records from various observational techniques are being homogenized. Cloud type classification was conducted using the k-Nearest Neighbor classifier in combination with a set of color and textural features. In addition, a radiative feature was introduced which improved the discrimination by up to 10%. The performance of the algorithm mainly depends on the atmospheric conditions, site-specific characteristics, the randomness of the selected images, and possible visual misclassifications: The mean success rate was 80–90% when the image only contained a single cloud class but dropped to 50–70% if the test images were completely randomly selected and multiple cloud classes occurred in the images.
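
The classification step combines colour, textural and radiative features with a k-Nearest Neighbor classifier. A minimal sketch of that pipeline (the feature definitions and training data below are invented placeholders, not the paper's feature set) could be:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Illustrative only: each sky image is reduced to a small feature vector of
# colour, texture and (as in the paper) a radiative feature; values are dummies.
def image_features(rgb_image, longwave_wm2):
    r, b = rgb_image[..., 0], rgb_image[..., 2]
    red_blue_ratio = float(np.mean(r) / (np.mean(b) + 1e-6))  # colour feature
    texture = float(np.std(rgb_image))                        # crude textural feature
    return [red_blue_ratio, texture, longwave_wm2]            # radiative feature

rng = np.random.default_rng(1)
train_images = rng.uniform(size=(20, 32, 32, 3))
train_longwave = rng.uniform(250, 400, size=20)
train_labels = rng.choice(["cumulus", "cirrus", "stratus"], size=20)

X = [image_features(img, lw) for img, lw in zip(train_images, train_longwave)]
clf = KNeighborsClassifier(n_neighbors=3).fit(X, train_labels)
print(clf.predict([image_features(rng.uniform(size=(32, 32, 3)), 310.0)]))
```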