945 results for Point density analysis
Abstract:
BACKGROUND: Pharmacists may improve the clinical management of major risk factors for cardiovascular disease (CVD) prevention. A systematic review was conducted to determine the impact of pharmacist care on the management of CVD risk factors among outpatients. METHODS: The MEDLINE, EMBASE, CINAHL, and Cochrane Central Register of Controlled Trials databases were searched for randomized controlled trials that involved pharmacist care interventions among outpatients with CVD risk factors. Two reviewers independently abstracted data and classified pharmacists' interventions. Mean changes in blood pressure, total cholesterol, low-density lipoprotein cholesterol, and the proportion of smokers were estimated using random-effects models. RESULTS: Thirty randomized controlled trials (11 765 patients) were identified. Pharmacist interventions, conducted exclusively by a pharmacist or implemented in collaboration with physicians or nurses, included patient education, patient-reminder systems, measurement of CVD risk factors, medication management and feedback to physicians, and educational interventions for health care professionals. Pharmacist care was associated with significant reductions in systolic/diastolic blood pressure (19 studies [10 479 patients]; -8.1 mm Hg [95% confidence interval {CI}, -10.2 to -5.9]/-3.8 mm Hg [95% CI, -5.3 to -2.3]), total cholesterol (9 studies [1121 patients]; -17.4 mg/dL [95% CI, -25.5 to -9.2]), and low-density lipoprotein cholesterol (7 studies [924 patients]; -13.4 mg/dL [95% CI, -23.0 to -3.8]), and with a reduction in the risk of smoking (2 studies [196 patients]; relative risk, 0.77 [95% CI, 0.67 to 0.89]). While most studies tended to favor pharmacist care over usual care, substantial heterogeneity was observed. CONCLUSION: Pharmacist care, delivered alone or in collaboration with physicians or nurses, improves the management of major CVD risk factors in outpatients.
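The pooling step described above (mean changes combined under random-effects models) can be sketched as follows. This is an illustrative DerSimonian–Laird implementation with made-up effect sizes and variances, not the authors' analysis code:

```python
# Illustrative sketch (not the review's code): DerSimonian-Laird
# random-effects pooling of study-level mean differences.
import math

def random_effects_pool(effects, variances):
    """Pool mean differences with DerSimonian-Laird weights; returns
    the pooled estimate and a 95% confidence interval."""
    k = len(effects)
    w = [1.0 / v for v in variances]                       # fixed-effect weights
    y_fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                     # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]         # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se

# Hypothetical systolic blood-pressure changes (mm Hg) from three trials
pooled, lo, hi = random_effects_pool([-9.0, -6.5, -8.2], [4.0, 2.5, 3.1])
```

When the heterogeneity statistic q exceeds its degrees of freedom, tau2 becomes positive and the weights shrink toward equality, which is how between-study heterogeneity widens the pooled interval.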
Abstract:
Various compositions of synthetic calcium phosphates (CaP) have been proposed, and their use has increased considerably over the past decades. Besides differences in physico-chemical properties, resorption, and osseointegration, artificial CaP bone grafts might differ in their resistance to biofilm formation. We investigated standardised cylinders of 5 different CaP bone grafts: cyclOS and chronOS (both β-TCP (tricalcium phosphate)), dicalcium phosphate (DCP), calcium-deficient hydroxyapatite (CDHA), and α-TCP. Various physico-chemical characteristics, e.g., geometrical density, porosity, and specific surface area, were investigated. Biofilm formation was carried out in tryptic soy broth (TSB) and human serum (SE) using Staphylococcus aureus (ATCC 29213) and S. epidermidis RP62A (ATCC 35984). The amount of biofilm was analysed by an established protocol using sonication and microcalorimetry. Physico-chemical characterisation showed marked differences in macro- and micropore size, specific surface area, and porosity accessible to bacteria between the 5 scaffolds. Biofilm formation was found on all scaffolds and was comparable for α-TCP, chronOS, CDHA, and DCP at corresponding time points when the scaffolds were incubated with the same organism and/or growth medium, but much lower for cyclOS. This is striking because cyclOS had an intermediate porosity, mean pore size, specific surface area, and porosity accessible to bacteria. Our results suggest that biofilm formation is not influenced by a single physico-chemical parameter alone but is a multi-step process influenced by several factors in parallel. Transfer from in vitro data to clinical situations is difficult; thus, advocating the use of cyclOS scaffolds over the four other CaP bone grafts in clinical situations with a high risk of infection cannot be clearly supported by our data.
Abstract:
Tone mapping is the problem of compressing the range of a high-dynamic-range image so that it can be displayed on a low-dynamic-range screen without losing details or introducing new ones: the final image should produce in the observer a sensation as close as possible to the perception produced by the real-world scene. We propose a tone mapping operator with two stages. The first stage is a global method that implements visual adaptation, based on experiments on human perception; in particular, we point out the importance of cone saturation. The second stage performs local contrast enhancement, based on a variational model inspired by color vision phenomenology. We evaluate this method with a metric validated by psychophysical experiments and, in terms of this metric, our method compares very well with the state of the art.
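The first, global stage is described as implementing visual adaptation with attention to cone saturation. Below is a minimal sketch of a perception-inspired global compression curve of the Naka–Rushton type; anchoring the semisaturation constant to the geometric mean luminance is an illustrative assumption, not the authors' operator:

```python
import numpy as np

def global_adaptation(lum, n=0.73):
    """Naka-Rushton type response r = L^n / (L^n + sigma^n), a common
    model of photoreceptor (cone) adaptation; output saturates toward 1.
    sigma is anchored to the geometric mean luminance (an assumption)."""
    eps = 1e-9
    sigma = np.exp(np.mean(np.log(lum + eps)))  # geometric mean as adaptation level
    ln = np.power(lum, n)
    return ln / (ln + sigma ** n)

hdr = np.array([0.01, 0.1, 1.0, 10.0, 100.0, 1000.0])  # synthetic luminances (cd/m^2)
ldr = global_adaptation(hdr)
```

The curve is monotone and bounded in (0, 1), so a 5-decade luminance range is mapped into a displayable range while preserving ordering.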
Abstract:
In the past, sensor networks in cities have been limited to fixed sensors, embedded in particular locations, under centralised control. Today, new applications can leverage wireless devices and use them as sensors to create aggregated information. In this paper, we show that the emerging patterns unveiled through the analysis of large sets of aggregated digital footprints can provide novel insights into how people experience the city and into some of the drivers behind these emerging patterns. We particularly explore the capacity to quantify the evolution of the attractiveness of urban space with a case study in the area of the New York City Waterfalls, a public art project of four man-made waterfalls rising from the New York Harbor. Methods to study the impact of an event of this nature have traditionally been based on the collection of static information, such as surveys and ticket-based people counts, which allow estimates to be generated about visitors' presence in specific areas over time. In contrast, our contribution makes use of the dynamic data that visitors generate, such as the density and distribution of aggregate phone calls and photos taken in different areas of interest and over time. Our analysis provides novel ways to quantify the impact of a public event on the distribution of visitors and on the evolution of the attractiveness of nearby points of interest. This information has potential uses for local authorities and researchers, as well as for service providers such as mobile network operators.
Abstract:
This paper presents a technique to estimate and model patient-specific pulsatility of cerebral aneurysms over one cardiac cycle, using 3D rotational X-ray angiography (3DRA) acquisitions. Aneurysm pulsation is modeled as a time-varying spline tensor field representing the deformation applied to a reference volume image, thus producing the instantaneous morphology at each time point in the cardiac cycle. The estimated deformation is obtained by matching multiple simulated projections of the deforming volume to their corresponding original projections. A weighting scheme is introduced to account for the relevance of each original projection for the selected time point. The wide coverage of the projections, together with the weighting scheme, ensures motion consistency in all directions. The technique has been tested on digital and physical phantoms that are realistic and clinically relevant in terms of geometry, pulsation, and imaging conditions. Results from digital phantom experiments demonstrate that the proposed technique is able to recover subvoxel pulsation with an error lower than 10% of the maximum pulsation in most cases. The experiments with the physical phantom demonstrated the feasibility of pulsation estimation, as well as the identification of different pulsation regions, under clinical conditions.
Abstract:
We study the minimum mean square error (MMSE) and the multiuser efficiency η of large dynamic multiple-access communication systems in which optimal multiuser detection is performed at the receiver, while the number and identities of active users are allowed to change at each transmission time. The system dynamics are ruled by a Markov model describing the evolution of channel occupancy, and a large-system analysis is performed as the number of observations grows large. Starting from the equivalent scalar channel and the fixed-point equation tying multiuser efficiency to MMSE, we extend the analysis to the case of a dynamic channel and derive lower and upper bounds for the MMSE (and, thus, for η as well) that hold in the limit of large signal-to-noise ratios and increasingly large observation time T.
Abstract:
The present paper aims to identify the effects of the Point System of Selection of immigrants in Quebec. I argue that the distribution of points results in a different composition of immigrant stocks in terms of origin mix, but not in terms of labour skills. To do so, I carry out a longitudinal descriptive analysis of the national composition of immigrants in Quebec and two other significant provinces (Ontario and British Columbia), as well as an analysis of the distribution of points in Quebec and in the rest of Canada.
Abstract:
RATIONALE The choice of containers for storage of aqueous samples between their collection, transport, and water hydrogen (2H) and oxygen (18O) stable isotope analysis is a topic of concern for a wide range of fields in environmental, geological, biomedical, food, and forensic sciences. The transport and separation of water molecules during water vapor or liquid uptake by sorption or solution, and the diffusive transport of water molecules through organic polymer material by permeation or pervaporation, may entail an isotopic fractionation. An experiment was conducted to evaluate the extent of such fractionation. METHODS Sixteen bottle-like containers of eleven different organic polymers, including low- and high-density polyethylene (LDPE and HDPE), polypropylene (PP), polycarbonate (PC), polyethylene terephthalate (PET), and perfluoroalkoxy-Teflon (PFA), of different wall thickness and size, were completely filled with the same mineral water and stored for 659 days under the same conditions of temperature and humidity. Particular care was exercised to keep the bottles tightly closed and prevent loss of water vapor through the seals. RESULTS Changes of up to +5 parts per thousand for δ2H values and +2.0 parts per thousand for δ18O values were measured for water after more than 1 year of storage within a plastic container, with the magnitude of change depending mainly on the type of organic polymer, wall thickness, and container size. The most important variations were measured for the PET and PC bottles. Waters stored in glass bottles with Polyseal™ cone-lined PP screw caps, and in thick-walled HDPE or PFA containers with linerless screw caps having an integrally molded inner sealing ring, preserved their original δ2H and δ18O values. The carbon, hydrogen, and oxygen stable isotope compositions of the organic polymeric materials were also determined.
CONCLUSIONS The results of this study clearly show that for precise and accurate measurements of the water stable isotope composition in aqueous solutions, rigorous sampling and storage procedures are needed both for laboratory standards and for unknown samples. Copyright (c) 2012 John Wiley & Sons, Ltd.
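The δ notation used above can be made concrete with a small sketch. The VSMOW isotope ratios below are standard reference values, and `r_shifted` is a hypothetical sample ratio constructed to match the largest observed drift:

```python
# Sketch of the delta notation used for reporting isotope compositions:
# delta (per mil) = (R_sample / R_standard - 1) * 1000, relative to VSMOW.
VSMOW_2H = 155.76e-6     # 2H/1H ratio of the VSMOW reference water
VSMOW_18O = 2005.20e-6   # 18O/16O ratio of the VSMOW reference water

def delta_per_mil(r_sample, r_standard):
    """Convert an absolute isotope ratio to a delta value in per mil."""
    return (r_sample / r_standard - 1.0) * 1000.0

# A +5 per-mil drift in delta-2H (the largest change reported above)
# corresponds to this hypothetical absolute ratio:
r_shifted = VSMOW_2H * (1.0 + 5.0 / 1000.0)
```

The smallness of the absolute ratio change involved (less than one part per thousand of an already tiny ratio) is why container-induced fractionation matters for precise measurements.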
Abstract:
Rock slope instabilities are implicitly linked to the supply of sediment and debris recharging channels prone to debris flow. Hence, the incorporation of bedrock structure and terrain morphology can be relevant in the analysis of sediment budgets and in debris flow hazard assessment. Here, the mode of debris production of the Manival catchment (northern French Alps) is documented through a study of its morphostructural aspects extracted from a high-resolution DEM. The terrain's implication in the process of debris supply is evaluated by: a) a systematic classification of the major morphological units based on slope gradient, which enables a spatial analysis of zones of debris production and deposition; b) a detailed structural analysis performed on the DEM in order to identify potentially unstable slopes; c) an analysis of gully orientations, which informs on the structural control of the source zones; d) localisation of high-density joint sets, which documents whether sources of continuous debris production are controlled by the structural setting of the catchment. These DEM-based indicators can be used as proxies for assessing the influence of the current topography and make it possible to quantify the degree of susceptibility to mass wasting and hillslope erosion activity. This contribution suggests some directions for characterising sediment flux dynamics in small alpine catchments.
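Step a), classifying morphological units from the slope gradient of a DEM, can be sketched as follows. The slope thresholds and the three-class scheme are illustrative assumptions, not the catchment-specific values used in the study:

```python
import numpy as np

def slope_degrees(dem, cell_size):
    """Slope gradient in degrees from a gridded DEM via finite differences."""
    dz_dy, dz_dx = np.gradient(dem, cell_size)
    return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

def classify_units(slope, thresholds=(15.0, 35.0)):
    """Three illustrative units: 0 = deposition (gentle slopes),
    1 = transit, 2 = debris production (steep rock slopes)."""
    return np.digitize(slope, thresholds)

dem = np.array([[10.0, 10.0, 10.0],   # toy 3 x 3 elevation grid (m)
                [10.0, 12.0, 20.0],
                [10.0, 20.0, 40.0]])
units = classify_units(slope_degrees(dem, cell_size=5.0))
```

On real data the same two calls run unchanged on the full raster, so the unit map can be intersected with gully and joint-set layers for the subsequent steps.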
Abstract:
One of the disadvantages of old age is that there is more past than future: this, however, may be turned into an advantage if the wealth of experience and, hopefully, wisdom gained in the past can be reflected upon and throw some light on possible future trends. To an extent, then, this talk is necessarily personal, certainly nostalgic, but also self-critical and inquisitive about our understanding of the discipline of statistics. A number of almost philosophical themes will run through the talk: search for appropriate modelling in relation to the real problem envisaged, emphasis on sensible balances between simplicity and complexity, the relative roles of theory and practice, the nature of communication of inferential ideas to the statistical layman, the inter-related roles of teaching, consultation and research. A list of keywords might be: identification of sample space and its mathematical structure, choices between transform and stay, the role of parametric modelling, the role of a sample space metric, the underused hypothesis lattice, the nature of compositional change, particularly in relation to the modelling of processes. While the main theme will be relevance to compositional data analysis we shall point to substantial implications for general multivariate analysis arising from experience of the development of compositional data analysis…
Abstract:
BACKGROUND CONTEXT: Studies involving factor analysis (FA) of the items in the North American Spine Society (NASS) outcome assessment instrument have revealed inconsistent factor structures for the individual items. PURPOSE: This study examined whether the factor structure of the NASS varied in relation to the severity of the back/neck problem and differed from that originally recommended by the developers of the questionnaire, by analyzing data before and after surgery in a large series of patients undergoing lumbar or cervical disc arthroplasty. STUDY DESIGN/SETTING: Prospective multicenter observational case series. PATIENT SAMPLE: Three hundred ninety-one patients with low back pain and 553 patients with neck pain completed questionnaires preoperatively and again at the 3- to 6-month and 12-month follow-ups (FUs), in connection with the SWISSspine disc arthroplasty registry. OUTCOME MEASURES: North American Spine Society outcome assessment instrument. METHODS: First an exploratory FA without a priori assumptions, and subsequently a confirmatory FA, were performed on the 17 items of the NASS-lumbar and the 19 items of the NASS-cervical collected at each assessment time point. Item-loading invariance was tested in the German version of the questionnaire for baseline and FU. RESULTS: Both the NASS-lumbar and NASS-cervical factor structures differed between the baseline and postoperative data sets. The confirmatory analysis and item-loading invariance showed a better fit for a three-factor (3F) structure for NASS-lumbar, containing items on "disability," "back pain," and "radiating pain, numbness, and weakness (leg/foot)," and for a 5F structure for NASS-cervical, including "disability," "neck pain," "radiating pain and numbness (arm/hand)," "weakness (arm/hand)," and "motor deficit (legs)." CONCLUSIONS: The best-fitting factor structure at both baseline and FU was selected for both the lumbar and cervical NASS questionnaires.
It differed from that proposed by the originators of the NASS instruments. Although the NASS questionnaire represents a valid outcome measure for degenerative spine diseases and is able to distinguish among all major symptom domains (factors) in patients undergoing lumbar and cervical disc arthroplasty, overall the item structure could be improved. Any potential revision of the NASS should consider its factorial structure; factorial invariance over time should be aimed for, to allow for more precise interpretations of treatment success.
Abstract:
In spite of its relative importance in the economy of many countries and its growing interrelationships with other sectors, agriculture has traditionally been excluded from accounting standards. Nevertheless, to support its Common Agricultural Policy, the European Commission has for years been making an effort to obtain standardized information on the financial performance and condition of farms. Through the Farm Accountancy Data Network (FADN), data are gathered every year from a rotating sample of 60,000 professional farms across all member states. FADN data collection is not structured as an accounting cycle but as an extensive questionnaire. This questionnaire refers to assets, liabilities, revenues, and expenses, and seems to aim at a "true and fair view" of the financial performance and condition of the farms it surveys. However, the definitions used in the questionnaire and the way data are aggregated often appear flawed from an accounting perspective. The objective of this paper is to contrast the accounting principles implicit in the FADN questionnaire with generally accepted accounting principles, particularly those found in the IVth Directive of the European Union, on the one hand, and those recently proposed by the International Accounting Standards Committee's Steering Committee on Agriculture in its Draft Statement of Principles, on the other. This is useful for two reasons. First, it allows suggestions to be made as to how the information provided by FADN could be brought more into accordance with the accepted accounting framework, becoming a more valuable tool for policy makers, farmers, and other stakeholders. Second, it helps in assessing the suitability of FADN as the starting point for a European accounting standard on agriculture.
Abstract:
The dynamics of the control of Aedes (Stegomyia) aegypti Linnaeus (Diptera, Culicidae) by Bacillus thuringiensis var. israelensis (Bti) has been related to temperature, larval density, and insecticide concentration. A mathematical model for the biological control of Aedes aegypti with Bti was constructed using data from the literature on the biology of the vector. The life cycle was described by differential equations. Lethal concentrations (LC50 and LC95) of Bti were determined in the laboratory under different experimental conditions. Temperature, colony, larval density, and bioinsecticide concentration presented marked differences in the analysis of the whole set of variables, although when analyzed individually, only temperature and concentration showed changes. The simulations indicated an inverse relationship between temperature and mosquito population; nonetheless, faster population growth is reached at higher temperatures. In conclusion, the model suggests the use of integrated control strategies targeting both immature and adult mosquitoes in order to achieve a reduction of Aedes aegypti.
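A life cycle described by differential equations, with Bti acting on the immature stages, can be illustrated with a simplified stage-structured model. The stages, all rate values, and the representation of Bti as an added larval mortality are assumptions for the sketch, not the paper's calibrated model:

```python
# Simplified egg/larva/adult model integrated with forward Euler;
# Bti is represented as an extra larval mortality rate mu_bti.
# All rates (per day) are illustrative values, not fitted parameters.
def simulate(days=300, dt=0.1, mu_bti=0.0):
    b, hatch, mat = 5.0, 0.3, 0.1        # oviposition, hatching, maturation
    mu_e, mu_l, mu_a = 0.05, 0.1, 0.07   # natural mortalities per stage
    k = 1000.0                           # larval carrying capacity
    e, l, a = 100.0, 10.0, 5.0           # initial eggs, larvae, adults
    for _ in range(int(days / dt)):      # forward-Euler time stepping
        de = b * a - (hatch + mu_e) * e
        dl = hatch * e * (1.0 - l / k) - (mat + mu_l + mu_bti) * l
        da = mat * l - mu_a * a
        e, l, a = e + de * dt, l + dl * dt, a + da * dt
    return a

adults_no_control = simulate(mu_bti=0.0)
adults_with_bti = simulate(mu_bti=2.0)
```

Because Bti removes larvae while density dependence also acts on the larval stage, the adult reduction is smaller than the larval kill rate alone would suggest, which is consistent with the paper's argument for combining immature and adult control.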
Abstract:
Biogeographic studies dealing with Bombyliidae are rare in the literature, and no information is available on the family's origin and early diversification. In this study, we found evidence from molecular phylogeny and from the fossil record supporting a Middle Jurassic origin of the Bombylioidea, taken as a starting point to discuss the biogeography and diversification of Crocidiinae. Based on a previously published phylogenetic hypothesis, we performed a Brooks Parsimony Analysis (BPA) to discuss the biogeographical history of Crocidiinae lineages. This subfamily is mostly distributed over arid areas of the early components of Gondwanaland, Chile and southern Africa, but also in the southwestern Palaearctic and southwestern Nearctic. The vicariant events affecting Crocidiinae biogeography at the generic level seem to be related to the sequential separation of a Laurasian clade from a Gondwanan clade, followed by the splitting of the latter into smaller components. This also leads to a hypothesis of an origin of the Crocidiinae in the Middle Jurassic, the same period in which other bombyliid lineages are supposed to have arisen and radiated.
Abstract:
The generalization of simple correspondence analysis, for two categorical variables, to multiple correspondence analysis, where there may be three or more variables, is not straightforward from both a mathematical and a computational point of view. In this paper we detail the exact computational steps involved in performing a multiple correspondence analysis, including the special aspects of adjusting the principal inertias to correct the percentages of inertia, supplementary points, and subset analysis. Furthermore, we give the algorithm for joint correspondence analysis, where the cross-tabulations of all unique pairs of variables are analysed jointly. The code in the R language for every step of the computations is given, as well as the results of each computation.
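The original gives R code for every step; the core computation (correspondence analysis of the indicator matrix, followed by the usual adjustment of the principal inertias) can be sketched in numpy as follows, with a tiny made-up data set:

```python
import numpy as np

def mca_adjusted_inertias(cats):
    """CA of the indicator matrix of Q categorical variables; principal
    inertias above 1/Q are rescaled by the standard adjustment
    (Q/(Q-1))^2 * (lambda - 1/Q)^2 to correct the percentages of inertia."""
    n, q = cats.shape
    cols = [(cats[:, j] == lev).astype(float)
            for j in range(q) for lev in sorted(set(cats[:, j]))]
    z = np.column_stack(cols)                 # indicator (one-hot) matrix
    p = z / z.sum()                           # correspondence matrix
    r, c = p.sum(axis=1), p.sum(axis=0)       # row and column masses
    s = (p - np.outer(r, c)) / np.sqrt(np.outer(r, c))  # standardized residuals
    lam = np.linalg.svd(s, compute_uv=False) ** 2       # principal inertias
    adj = np.array([(q / (q - 1.0)) ** 2 * (l - 1.0 / q) ** 2
                    for l in lam if l > 1.0 / q + 1e-12])
    return lam, adj

data = np.array([[0, 1], [0, 1], [1, 0], [1, 0], [0, 0], [1, 1]])
lam, adj = mca_adjusted_inertias(data)
```

Only the inertias exceeding 1/Q are retained by the adjustment, which is why the adjusted percentages of inertia are no longer artificially deflated by the indicator coding.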