905 results for Eigensystem realization algorithms


Relevance:

20.00%

Publisher:

Abstract:

BACKGROUND: HIV surveillance requires monitoring of new HIV diagnoses and differentiation of incident and older infections. In 2008, Switzerland implemented a system for monitoring incident HIV infections based on the results of a line immunoassay (Inno-Lia) mandatorily conducted for HIV confirmation and type differentiation (HIV-1, HIV-2) of all newly diagnosed patients. Based on this system, we assessed the proportion of incident HIV infections among newly diagnosed cases in Switzerland during 2008-2013. METHODS AND RESULTS: Inno-Lia antibody reaction patterns recorded in anonymous HIV notifications to the federal health authority were classified by 10 published algorithms into incident (up to 12 months) or older infections. Using these data, annual incident infection estimates were obtained in two ways: (i) based on the diagnostic performance of the algorithms, using the relationship 'observed incident = true incident + false incident', and (ii) based on the window periods of the algorithms, using the relationship 'Prevalence = Incidence × Duration'. From 2008-2013, 3,851 HIV notifications were received. Adult HIV-1 infections amounted to 3,809 cases, and 3,636 of them (95.5%) contained Inno-Lia data. Incident infection totals were similar for the performance- and window-based methods, amounting on average to 1,755 (95% confidence interval, 1,588-1,923) and 1,790 cases (95% CI, 1,679-1,900), respectively. More than half of these were among men who have sex with men. Both methods showed a continuous decline in annual incident infections during 2008-2013, totaling -59.5% and -50.2%, respectively. The decline in incident infections continued even in 2012, when a 15% increase in HIV notifications was observed; this increase was entirely due to older infections. The overall declines during 2008-2013 were of similar extent across the major transmission groups. CONCLUSIONS: Inno-Lia-based incident HIV-1 infection surveillance proved useful and reliable. It represents an additional, free public health benefit of using this relatively costly test for HIV confirmation and type differentiation.
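
A minimal sketch (ours, not the authors' code) may help illustrate the two estimation relationships quoted above: (i) correcting the raw count of algorithm-classified "incident" cases for the algorithm's diagnostic performance, and (ii) dividing the cases found inside the recency window by the window length, per 'Prevalence = Incidence × Duration'. All names and numbers below are illustrative placeholders.

    def performance_based(n_classified_incident, n_total, sensitivity, false_recent_rate):
        """Relationship (i): observed incident = true incident * sensitivity
        + (total - true incident) * false-recent rate; solve for the true count."""
        return (n_classified_incident - n_total * false_recent_rate) / (
            sensitivity - false_recent_rate
        )

    def window_based(n_classified_incident, window_years):
        """Relationship (ii): cases inside the recency window divided by the
        window duration give an annual incidence estimate."""
        return n_classified_incident / window_years

    # Illustrative values only (not the paper's data):
    print(performance_based(320, 640, sensitivity=0.85, false_recent_rate=0.05))
    print(window_based(320, window_years=1.0))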

Relevance:

20.00%

Publisher:

Abstract:

Academic and industrial research in the late 90s brought about an exponential explosion of DNA sequence data. Automated expert systems are being created to help biologists extract patterns, trends, and links from this ever-deepening ocean of information. Two such systems, aimed at retrieving and subsequently utilizing phylogenetically relevant information, were developed in this dissertation, the major objective of which was to automate the often difficult and confusing phylogenetic reconstruction process.

Popular phylogenetic reconstruction methods, such as distance-based methods, attempt to find an optimal tree topology (one that reflects the relationships among related sequences and their evolutionary history) by searching through the topology space. Various compromises between fast (but incomplete) and exhaustive (but computationally prohibitive) search heuristics have been suggested. An intelligent compromise algorithm that relies on the flexible "beam" search principle from the Artificial Intelligence domain, and uses pre-computed local topology reliability information to adjust the beam search space continuously, is described in the second chapter of this dissertation.

However, sometimes even a (virtually) complete distance-based method is inferior to the significantly more elaborate (and computationally expensive) maximum likelihood (ML) method. In fact, depending on the nature of the sequence data in question, either method might prove superior. Therefore, it is difficult (even for an expert) to tell a priori which phylogenetic reconstruction method—distance-based, ML, or maybe maximum parsimony (MP)—should be chosen for any particular data set.

A number of factors, often hidden, influence the performance of a method. For example, it is generally understood that for a phylogenetically "difficult" data set, more sophisticated methods (e.g., ML) tend to be more effective and thus should be chosen. However, it is the interplay of many factors that one needs to consider in order to avoid choosing an inferior method (potentially a costly mistake, both in terms of computational expense and in terms of reconstruction accuracy).

Chapter III of this dissertation details a phylogenetic reconstruction expert system that selects the proper method automatically. It uses a classifier (a Decision Tree-inducing algorithm) to map a new data set to the proper phylogenetic reconstruction method.
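
Since the abstract invokes the beam search principle, a generic skeleton (our sketch, not the dissertation's algorithm) may clarify it; `expand` and `score` are placeholder functions standing in for topology extension and reliability scoring.

    def beam_search(initial_topologies, expand, score, beam_width, n_steps):
        """Keep only the beam_width best candidate topologies at each step:
        a compromise between greedy and exhaustive topology search."""
        beam = sorted(initial_topologies, key=score, reverse=True)[:beam_width]
        for _ in range(n_steps):
            candidates = []
            for topology in beam:
                candidates.extend(expand(topology))  # e.g. attach the next taxon
            if not candidates:
                break
            # The adaptive variant described above would widen or narrow
            # beam_width here using pre-computed local reliability estimates.
            beam = sorted(candidates, key=score, reverse=True)[:beam_width]
        return beam[0]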

Relevance:

20.00%

Publisher:

Abstract:

Background. Diabetes places a significant burden on the health care system. Reduction in blood glucose levels (HbA1c) reduces the risk of complications; however, little is known about the impact of disease management programs on medical costs for patients with diabetes. In 2001, economic costs associated with diabetes totaled $100 billion, and indirect costs totaled $54 billion.

Objective. To compare outcomes of nurse case management by treatment algorithms with conventional primary care for glycemic control and cardiovascular risk factors in type 2 diabetic patients in a low-income Mexican American community-based setting, and to compare the cost effectiveness of the two programs. Patient compliance was also assessed.

Research design and methods. An observational group comparison to evaluate a treatment intervention for type 2 diabetes management was implemented at three outpatient health facilities in San Antonio, Texas. All eligible type 2 diabetic patients attending the clinics during 1994–1996 became part of the study. Data were obtained from the study database, medical records, hospital accounting, and pharmacy cost lists, and entered into a computerized database. Three groups were compared: a Community Clinic Nurse Case Manager (CC-TA) following treatment algorithms, a University Clinic Nurse Case Manager (UC-TA) following treatment algorithms, and Primary Care Physicians (PCP) following conventional care practices at a Family Practice Clinic. The algorithms provided a disease management model specifically for hyperglycemia, dyslipidemia, hypertension, and microalbuminuria that progressively moved the patient toward ideal goals through adjustments in medication, self-monitoring of blood glucose, meal planning, and reinforcement of diet and exercise. Cost effectiveness of hemoglobin A1c final endpoints was compared.

Results. There were 358 patients analyzed: 106 patients in the CC-TA group, 170 in the UC-TA group, and 82 in the PCP group. Change in hemoglobin A1c (HbA1c) was the primary outcome measured. HbA1c results at baseline, 6, and 12 months were 10.4%, 7.1%, and 7.3% for CC-TA; 10.5%, 7.1%, and 7.2% for UC-TA; and 10.0%, 8.5%, and 8.7% for PCP. Mean patient compliance was 81%. Levels of cost effectiveness were significantly different between clinics.

Conclusion. Nurse case management with treatment algorithms significantly improved glycemic control in patients with type 2 diabetes, and was more cost effective.

Relevance:

20.00%

Publisher:

Abstract:

Digital terrain models (DTMs) typically contain large numbers of postings, from hundreds of thousands to billions. Many algorithms that run on DTMs require topological knowledge of the postings, such as finding nearest neighbors or finding the posting closest to a chosen location. If the postings are arranged irregularly, topological information is costly to compute and to store. This paper offers a practical approach to organizing and searching irregularly-spaced data sets by presenting a collection of efficient algorithms (O(N), O(lg N)) that compute important topological relationships with only a simple supporting data structure. These relationships include finding the postings within a window, locating the posting nearest a point of interest, finding the neighborhood of postings nearest a point of interest, and ordering the neighborhood counter-clockwise. These algorithms depend only on two sorted arrays of two-element tuples, each holding a planimetric coordinate and an integer identification number indicating which posting the coordinate belongs to. There is one array for each planimetric coordinate (eastings and northings). These two arrays cost minimal overhead to create and store but permit the data to remain arranged irregularly.
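
The supporting data structure described above is simple enough to sketch. Assuming nothing beyond the two sorted coordinate arrays, a window query can binary-search each array and intersect the resulting id sets; the names below are ours, not the paper's.

    from bisect import bisect_left, bisect_right

    def build_index(postings):
        """postings: list of (easting, northing); the list index serves as the
        integer identification number of each posting."""
        by_e = sorted((e, i) for i, (e, n) in enumerate(postings))
        by_n = sorted((n, i) for i, (e, n) in enumerate(postings))
        return by_e, by_n

    def window_query(index, e_min, e_max, n_min, n_max):
        """Find postings inside a window in O(lg N + k) per axis."""
        by_e, by_n = index
        in_e = {i for _, i in by_e[bisect_left(by_e, (e_min, -1)):
                                   bisect_right(by_e, (e_max, float("inf")))]}
        in_n = {i for _, i in by_n[bisect_left(by_n, (n_min, -1)):
                                   bisect_right(by_n, (n_max, float("inf")))]}
        return in_e & in_n

    postings = [(3.0, 7.0), (1.5, 2.0), (4.2, 6.1), (2.9, 6.8)]
    print(window_query(build_index(postings), 2.0, 4.5, 6.0, 7.5))  # {0, 2, 3}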

Relevance:

20.00%

Publisher:

Abstract:

To estimate the kinematics of the SIRGAS reference frame, the Deutsches Geodätisches Forschungsinstitut (DGFI), as the IGS Regional Network Associate Analysis Centre for SIRGAS (IGS RNAAC SIR), yearly computes a cumulative (multi-year) solution containing all available weekly solutions delivered by the SIRGAS analysis centres. These cumulative solutions include the models, standards, and strategies widely applied at the time they were computed, and cover different time spans depending on the availability of the weekly solutions. This data set corresponds to the multi-year solution SIR11P01. It is based on the combination of the weekly normal equations covering the time span from 2000-01-02 (GPS week 1043) to 2011-04-16 (GPS week 1631), when the IGS08 reference frame was introduced. It refers to ITRF2008, epoch 2005.0, and contains 230 stations with 269 occupations. Its precision was estimated to be ±1.0 mm (horizontal) and ±2.4 mm (vertical) for the station positions, and ±0.7 mm/a (horizontal) and ±1.1 mm/a (vertical) for the constant velocities. The computation strategy and results are described in detail in Sánchez and Seitz (2011). The IGS RNAAC SIR computation of the SIRGAS reference frame is possible thanks to the active participation of many Latin American and Caribbean colleagues, who not only make the measurements of the stations available, but also operate SIRGAS analysis centres processing the observational data on a routine basis (more details at http://www.sirgas.org). The achievements of SIRGAS are a consequence of a successful international geodetic cooperation that not only follows and meets concrete objectives, but has also become a permanent and self-sustaining geodetic community guaranteeing the quality, reliability, and long-term stability of the SIRGAS reference frame. The SIRGAS activities are strongly supported by the International Association of Geodesy (IAG) and the Pan-American Institute for Geography and History (PAIGH). The IGS RNAAC SIR highly appreciates all this support.

Relevance:

20.00%

Publisher:

Abstract:

A modelling effort for Territorial Intelligence Community Systems (TICS) began in 2009, at the end of the CaEnti project. It has several objectives:
- Establish a set of documents understandable both by the computer specialists in charge of software development and by territorial intelligence specialists.
- Lay the foundation of a vocabulary describing the main notions of the TICS domain.
- Ensure the evolution and sustainability of tools and systems in a highly scalable research context.

The definition of models representing the data manipulated by the tools of the Catalyse suitcase is not sufficient to describe the TICS domain completely. We established a correspondence between this computer vocabulary and the vocabulary of the domain to allow communication between computer scientists and territorial intelligence specialists. Furthermore, it is necessary to describe the roles of TICS, for which other kinds of computing models are useful. In this communication we present the modelling of the TICS project with business process models.

Relevance:

20.00%

Publisher:

Abstract:

The DTRF2008 is a realization of the International Terrestrial Reference System (ITRS). The DTRF2008 consists of station positions and velocities of globally distributed observing stations of the space geodetic observation techniques VLBI, SLR, GPS, and DORIS. The DTRF2008 was released in May 2010 and includes the observation data of the techniques up to and including 2008. The observation data are processed and submitted by the corresponding international services: IGS (International GNSS Service, http://igscb.jpl.nasa.gov), IVS (International VLBI Service, http://ivscc.gsfc.nasa.gov), ILRS (International Laser Ranging Service, http://ilrs.gsfc.nasa.gov), and IDS (International DORIS Service, http://ids-doris.org). The DTRF2008 is an independent ITRS realization, computed on the basis of the same input data as the ITRF2008 (IGN, Paris). The two realizations differ with respect to their computation strategies: while the ITRF2008 is based on the combination of solutions, the DTRF2008 is computed by the combination of normal equations. The DTRF2008 comprises the coordinates of 559 GPS, 106 VLBI, 122 SLR, and 132 DORIS stations. The reference epoch is 1 January 2005, 0h UTC. The Earth Orientation Parameters (EOP), i.e. the coordinates of the terrestrial and the celestial pole, UT1-UTC, and the Length of Day (LOD), were estimated simultaneously with the station coordinates. The EOP time series cover the period from 1983 to 2008. The station names are the official IERS designations: CDP numbers or 4-character IDs and DOMES numbers (http://itrf.ensg.ign.fr/doc_ITRF/iers_sta_list.txt). The solution is available in different file formats (SINEX and SSC). A detailed description of the solution is given by Seitz M. et al. (2012). The results of a comparison of DTRF2008 and ITRF2008 are given by Seitz M. et al. (2013). More information, as well as residual time series of the station positions, can be made available on request.
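
The strategic difference noted above (combination of solutions versus combination of normal equations) can be sketched in a few lines. This is a simplified illustration under our own assumptions; real ITRS combinations also handle datum constraints, relative weighting, and epoch reduction, which we omit.

    import numpy as np

    def combine_normal_equations(normals, rhs):
        """Stack per-technique normal equations N_i x = y_i and solve once,
        rather than averaging separately computed solutions."""
        N = sum(normals)
        y = sum(rhs)
        return np.linalg.solve(N, y)

    # Two toy "techniques" observing the same 2-parameter model (A_i x = l_i):
    A1, l1 = np.array([[1.0, 0.0], [1.0, 1.0]]), np.array([1.0, 2.0])
    A2, l2 = np.array([[0.0, 1.0], [1.0, 2.0]]), np.array([1.1, 3.0])
    normals = [A.T @ A for A in (A1, A2)]             # N_i = A_i^T A_i
    rhs = [A.T @ l for A, l in ((A1, l1), (A2, l2))]  # y_i = A_i^T l_i
    print(combine_normal_equations(normals, rhs))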

Relevance:

20.00%

Publisher:

Abstract:

The CoastColour Round Robin (CCRR) project (http://www.coastcolour.org), funded by the European Space Agency (ESA), was designed to bring together a variety of reference datasets and to use these to test algorithms and assess their accuracy for retrieving water quality parameters. This information was then developed to help end-users of remote sensing products select the most accurate algorithms for their coastal region. To facilitate this, an inter-comparison of the performance of algorithms for the retrieval of in-water properties over coastal waters was carried out. The comparison used three types of datasets on which ocean colour algorithms were tested. The description and comparison of these three datasets are the focus of this paper; they include Medium Resolution Imaging Spectrometer (MERIS) Level 2 match-ups, in situ reflectance measurements, and data generated by a radiative transfer model (HydroLight). The datasets mainly consist of 6,484 marine reflectance spectra associated with various geometrical (sensor viewing and solar angles) and sky conditions and water constituents: Total Suspended Matter (TSM) and Chlorophyll-a (CHL) concentrations, and the absorption of Coloured Dissolved Organic Matter (CDOM). Inherent optical properties were also provided in the simulated datasets (5,000 simulations) and from the 3,054 match-up locations. The distributions of reflectance at selected MERIS bands and band ratios, and of CHL and TSM as a function of reflectance, from the three datasets are compared. Match-up and in situ sites where deviations occur are identified. The distributions of the three reflectance datasets are also compared to the simulated and in situ reflectances used previously by the International Ocean Colour Coordinating Group (IOCCG, 2006) for algorithm testing, showing a clear extension of the CCRR data, which covers more turbid waters.
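
As a hint of what such algorithm inter-comparisons evaluate, here is a generic band-ratio chlorophyll retrieval of the OCx polynomial family; the coefficients are placeholders of ours, not those of any algorithm tested in CCRR.

    import math

    def band_ratio_chl(r_blue, r_green, coeffs=(0.3, -2.8, 1.4, -0.4)):
        """CHL from a polynomial in the log of a blue/green reflectance ratio."""
        x = math.log10(r_blue / r_green)
        log_chl = sum(a * x**k for k, a in enumerate(coeffs))
        return 10.0 ** log_chl  # CHL in mg m^-3

    print(band_ratio_chl(r_blue=0.004, r_green=0.006))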

Relevance:

20.00%

Publisher:

Abstract:

We present the data structures and algorithms used in our approach for building domain ontologies from folksonomies and linked data. In this approach we extract domain terms from folksonomies and enrich them with semantic information from the Linked Open Data cloud. As a result, we obtain a domain ontology that combines the emergent knowledge of social tagging systems with formal knowledge from ontologies.
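
A minimal sketch (our own, not the paper's code) of the two steps just described: extract candidate domain terms from tag assignments, then enrich them with semantic information from the Linked Open Data cloud, here via a DBpedia SPARQL lookup using the SPARQLWrapper library (requires network access).

    from collections import Counter
    from SPARQLWrapper import SPARQLWrapper, JSON

    def extract_terms(tag_assignments, min_count=2):
        """Keep tags used often enough to count as candidate domain terms."""
        counts = Counter(tag for _, tag in tag_assignments)
        return [t for t, c in counts.items() if c >= min_count]

    def enrich_term(term):
        """Look up a DBpedia resource whose English label matches the term."""
        sparql = SPARQLWrapper("https://dbpedia.org/sparql")
        sparql.setQuery(
            'SELECT ?s WHERE { ?s rdfs:label "%s"@en } LIMIT 1' % term.capitalize()
        )
        sparql.setReturnFormat(JSON)
        rows = sparql.query().convert()["results"]["bindings"]
        return rows[0]["s"]["value"] if rows else None

    tags = [("res1", "jazz"), ("res2", "jazz"), ("res3", "blues")]
    for term in extract_terms(tags):
        print(term, enrich_term(term))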

Relevance:

20.00%

Publisher:

Abstract:

A multiplicative and a semi-mechanistic, BWB-type [Ball, J.T., Woodrow, I.E., Berry, J.A., 1987. A model predicting stomatal conductance and its contribution to the control of photosynthesis under different environmental conditions. In: Biggens, J. (Ed.), Progress in Photosynthesis Research, vol. IV. Martinus Nijhoff, Dordrecht, pp. 221–224.] algorithm for calculating stomatal conductance (gs) at the leaf level have been parameterised for two crop and two tree species to test their use in regional-scale ozone deposition modelling. The algorithms were tested against measured, site-specific data for durum wheat, grapevine, beech, and birch of different European provenances. A direct comparison of the two algorithms showed similar performance in predicting hourly means and daily time-courses of gs, whereas the multiplicative algorithm outperformed the BWB-type algorithm in modelling seasonal time-courses due to its inclusion of a phenology function. Re-parameterising the algorithms for local conditions, in order to validate ozone deposition modelling on a European scale, reveals the higher input requirements of the BWB-type algorithm compared to the multiplicative algorithm, because the former must model net photosynthesis (An).
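
The two families of algorithms compared above follow standard literature formulations that are easy to sketch; all parameter values below are illustrative, not the paper's fitted values.

    def gs_multiplicative(gmax, f_phen, f_light, f_temp, f_vpd, f_swp, f_min=0.01):
        """Jarvis/Emberson-type multiplicative model: gmax scaled by 0-1
        response functions; the phenology term f_phen is what gave the
        multiplicative algorithm its advantage on seasonal time-courses."""
        return gmax * f_phen * f_light * max(f_min, f_temp * f_vpd * f_swp)

    def gs_bwb(a_n, rh_s, cs, g0=0.01, g1=9.0):
        """Ball-Woodrow-Berry (1987) semi-mechanistic model: gs is linear in
        An * hs / cs, so it needs net photosynthesis An as an input -- the
        higher input requirement noted above."""
        return g0 + g1 * a_n * rh_s / cs

    # Illustrative calls (units: mol m-2 s-1 for gs and An, mol mol-1 for cs):
    print(gs_multiplicative(gmax=0.45, f_phen=0.9, f_light=0.8,
                            f_temp=0.95, f_vpd=0.7, f_swp=1.0))
    print(gs_bwb(a_n=12e-6, rh_s=0.65, cs=360e-6))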

Relevance:

20.00%

Publisher:

Abstract:

A new method for detecting microcalcifications in regions of interest (ROIs) extracted from digitized mammograms is proposed. The top-hat transform is a technique based on mathematical morphology operations and, in this paper, is used to perform contrast enhancement of the microcalcifications. To improve microcalcification detection, a novel image sub-segmentation approach based on the possibilistic fuzzy c-means algorithm is used. From the original ROIs, window-based features, such as the mean and standard deviation, were extracted; these features were used as an input vector for a classifier. The classifier is based on an artificial neural network that identifies patterns belonging to microcalcifications and healthy tissue. Our results show that the proposed method is a good alternative for automatically detecting microcalcifications, because this stage is an important part of early breast cancer detection.
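
A minimal sketch under our own assumptions (not the authors' pipeline): white top-hat contrast enhancement of the bright, point-like microcalcifications, followed by extraction of window-based mean/standard-deviation features of the kind fed to the classifier described above.

    import numpy as np
    from skimage.morphology import white_tophat, disk

    def enhance_and_features(roi, window=9):
        """roi: 2-D grayscale array. The white top-hat (image minus its
        morphological opening) keeps bright structures smaller than the
        structuring element, which suits point-like microcalcifications."""
        enhanced = white_tophat(roi, footprint=disk(5))
        half = window // 2
        feats = [
            (patch.mean(), patch.std())  # entries of the input vector
            for r in range(half, roi.shape[0] - half, window)
            for c in range(half, roi.shape[1] - half, window)
            for patch in [enhanced[r - half:r + half + 1, c - half:c + half + 1]]
        ]
        return enhanced, np.array(feats)

    roi = np.random.default_rng(0).random((64, 64))
    _, feats = enhance_and_features(roi)
    print(feats.shape)  # (n_windows, 2)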