907 results for "Pure points of a measure"
Abstract:
Objectives: Describe the main patterns in breastfeeding; measure some of the predictors of breastfeeding; measure some of the consequences of breastfeeding; introduce some useful statistical techniques.
Abstract:
Introduction: Coronary magnetic resonance angiography (MRA) is a medical imaging technique that involves collecting data from consecutive heartbeats, always at the same time in the cardiac cycle, in order to minimize heart motion artifacts. This technique relies on the assumption that the coronary arteries follow the same trajectory from heartbeat to heartbeat. Until now, the choice of the acquisition window in the cardiac cycle was based exclusively on the position of minimal coronary motion. The goal of this study was to test the hypothesis that there are time intervals during the cardiac cycle when coronary beat-to-beat repositioning is optimal. The repositioning uncertainty values in these time intervals were then compared with the intervals of low coronary motion in order to propose an optimal acquisition window for coronary MRA. Methods: Cine breath-hold x-ray angiograms with synchronous ECG were collected from 11 patients who underwent elective routine diagnostic coronary angiography. Twenty-three bifurcations of the left coronary artery were selected as markers to evaluate repositioning uncertainty and velocity during the cardiac cycle. Each bifurcation was tracked by two observers with the help of a user-assisted algorithm implemented in Matlab (The MathWorks, Natick, MA, USA) that compared the trajectories of the markers from consecutive heartbeats and computed the coronary repositioning uncertainty in steps of 50 ms up to 650 ms after the R-wave. Repositioning uncertainty was defined as the diameter of the smallest circle encompassing the points being compared at the same time after the R-wave. Student's t-tests with a false discovery rate (FDR, q = 0.1) correction for multiple comparisons were applied to determine whether coronary repositioning and velocity vary statistically during the cardiac cycle. Bland-Altman plots and linear regression were used to assess intra- and inter-observer agreement.
Results: The analysis of left coronary artery beat-to-beat repositioning uncertainty shows a tendency towards better repositioning in mid systole (less than 0.84±0.58 mm) and mid diastole (less than 0.89±0.60 mm) than in the rest of the cardiac cycle (highest value at 50 ms: 1.35±0.64 mm). According to Student's t-tests with FDR correction for multiple comparisons (q = 0.1), two intervals, in mid systole (150-200 ms) and mid diastole (550-600 ms), provide statistically better repositioning than early systole and early diastole. Coronary velocity analysis reveals that the left coronary artery moves more slowly in end systole (14.35±11.35 mm/s at 225 ms) and mid diastole (11.78±11.62 mm/s at 625 ms) than in the rest of the cardiac cycle (highest value at 25 ms: 55.96±22.34 mm/s). This was confirmed by Student's t-tests with FDR correction for multiple comparisons (q = 0.1, FDR-corrected p-value = 0.054): the coronary velocity values at 225, 575 and 625 ms differ little from one another but are statistically lower than all the others. Bland-Altman plots and linear regression show that intra-observer agreement (y = 0.97x + 0.02 with R² = 0.93 at 150 ms) is better than inter-observer agreement (y = 0.8x + 0.11 with R² = 0.67 at 150 ms). Discussion: The present study has demonstrated that there are two time intervals in the cardiac cycle, one in mid systole and one in mid diastole, where left coronary artery repositioning uncertainty reaches local minima. It has also been shown that velocity is lowest in end systole and mid diastole. Since systole is less influenced by heart rate variability than diastole, it was finally proposed to test an acquisition window between 150 and 200 ms after the R-wave.
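The repositioning-uncertainty metric defined above - the diameter of the smallest circle encompassing the marker positions compared at a given delay after the R-wave - can be sketched as follows. This is a minimal brute-force illustration in Python, not the authors' Matlab implementation; the function name and tolerance are our choices.

```python
import itertools
import math

def enclosing_diameter(points, eps=1e-9):
    """Diameter of the smallest circle enclosing all 2-D points
    (brute force over pair- and triple-defined candidate circles;
    adequate for the handful of heartbeats compared per time step)."""
    def covers(cx, cy, r):
        return all(math.hypot(x - cx, y - cy) <= r + eps for x, y in points)

    best = None
    # Candidate circles whose diameter is a pair of points.
    for (x1, y1), (x2, y2) in itertools.combinations(points, 2):
        cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
        r = math.hypot(x1 - x2, y1 - y2) / 2
        if covers(cx, cy, r) and (best is None or r < best):
            best = r
    # Candidate circumcircles of point triples.
    for (x1, y1), (x2, y2), (x3, y3) in itertools.combinations(points, 3):
        d = 2 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))
        if abs(d) < eps:
            continue  # collinear triple: no circumcircle
        ux = ((x1**2 + y1**2) * (y2 - y3) + (x2**2 + y2**2) * (y3 - y1)
              + (x3**2 + y3**2) * (y1 - y2)) / d
        uy = ((x1**2 + y1**2) * (x3 - x2) + (x2**2 + y2**2) * (x1 - x3)
              + (x3**2 + y3**2) * (x2 - x1)) / d
        r = math.hypot(x1 - ux, y1 - uy)
        if covers(ux, uy, r) and (best is None or r < best):
            best = r
    return 2 * best
```

For the marker positions (0,0), (2,0) and (1,1), the smallest enclosing circle has the first two points as a diameter, so the repositioning uncertainty is 2.0.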
Parts, places, and perspectives: a theory of spatial relations based on mereotopology and convexity
Abstract:
This thesis sets out to carry on the philosophical work begun in Casati and Varzi's seminal book Parts and Places, by extending their general reflections on the basic formal structure of spatial representation beyond mereotopology and absolute location to the question of perspectives and perspective-dependent spatial relations. We show how, on the basis of a conceptual analysis of such notions as perspective and direction, a mereotopological theory with convexity can express perspectival spatial relations in a strictly qualitative framework. We start by introducing a particular mereotopological theory, AKGEMT, and argue that it constitutes an adequate core for a theory of spatial relations. Two features of AKGEMT are of particular importance: AKGEMT is an extensional mereotopology, implying that sameness of proper parts is a sufficient and necessary condition for identity, and it allows for (lower-dimensional) boundary elements in its domain of quantification. We then discuss an extension of AKGEMT, AKGEMTS, which results from the addition of a binary segment operator whose interpretation is that of a straight line segment between mereotopological points. Based on existing axiom systems in standard point-set topology, we propose an axiomatic characterisation of the segment operator and show that it is strong enough to sustain complex properties of a convexity predicate and a convex hull operator. We compare our segment-based characterisation of the convex hull to Cohn et al.'s axioms for the convex hull operator, arguing that our notion of convexity is significantly stronger. The discussion of AKGEMTS defines the background theory of spatial representation on which the developments in the second part of this thesis are built.
The second part deals with perspectival spatial relations in two-dimensional space, i.e., such relations as those expressed by 'in front of', 'behind', 'to the left/right of', etc., and develops a qualitative formalism for perspectival relations within the framework of AKGEMTS. Two main claims are defended in part 2: that perspectival relations in two-dimensional space are four-place relations of the kind R(x, y, z, w), to be read as x is R-related to y as z looks at w; and that these four-place structures can be satisfactorily expressed within the qualitative theory AKGEMTS. To defend these two claims, we start by arguing for a unified account of perspectival relations, thus rejecting the traditional distinction between 'relative' and 'intrinsic' perspectival relations. We present a formal theory of perspectival relations in the framework of AKGEMTS, deploying the idea that perspectival relations in two-dimensional space are four-place relations with a locational and a perspectival part, and show how this four-place structure leads to a unified framework of perspectival relations. Finally, we present a philosophical motivation for the idea that perspectival relations are four-place, cashing out the thesis that perspectives are vectorial properties and arguing that vectorial properties are relations between spatial entities. Using Fine's notion of 'qua objects' for an analysis of points of view, we show at last how our four-place approach to perspectival relations compares to more traditional understandings.
Abstract:
Gene therapy approaches using recombinant adeno-associated virus serotype 2 (rAAV2) and serotype 8 (rAAV8) have achieved significant clinical benefits. The generation of rAAV Reference Standard Materials (RSM) is key to providing points of reference for particle titer, vector genome titer, and infectious titer for gene transfer vectors. Following the example of the rAAV2RSM, here we have generated and characterized a novel RSM based on rAAV serotype 8. The rAAV8RSM was produced using transient transfection, and the purification was based on density gradient ultracentrifugation. The rAAV8RSM was distributed for characterization along with standard assay protocols to 16 laboratories worldwide. Mean titers and 95% confidence intervals were determined for capsid particles (mean, 5.50×10^11 pt/ml; CI, 4.26×10^11 to 6.75×10^11 pt/ml), vector genomes (mean, 5.75×10^11 vg/ml; CI, 3.05×10^11 to 1.09×10^12 vg/ml), and infectious units (mean, 1.26×10^9 IU/ml; CI, 6.46×10^8 to 2.51×10^9 IU/ml). Notably, there was a significant degree of variation between institutions for each assay despite the relatively tight correlation of assay results within an institution. This outcome emphasizes the need to use RSMs to calibrate the titers of rAAV vectors in preclinical and clinical studies at a time when the field is maturing rapidly. The rAAV8RSM has been deposited at the American Type Culture Collection (VR-1816) and is available to the scientific community.
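The asymmetric confidence intervals above suggest statistics computed on a log scale, as is usual for titers. A sketch of how such a cross-laboratory geometric mean and 95% CI might be obtained - the exact statistical procedure is an assumption on our part, and the t critical value is a placeholder:

```python
import math
import statistics

def log_mean_ci(titers, t_crit=2.131):
    """Geometric mean and approximate 95% CI of titers, computed on a
    log10 scale (hence the asymmetric intervals around the mean).
    t_crit is a hypothetical Student-t critical value (here for 15
    degrees of freedom, i.e. 16 laboratories); the report's exact
    procedure is not given in the abstract."""
    logs = [math.log10(v) for v in titers]
    m = statistics.mean(logs)
    se = statistics.stdev(logs) / math.sqrt(len(logs))
    return 10 ** m, 10 ** (m - t_crit * se), 10 ** (m + t_crit * se)
```

On a log scale the interval is symmetric around the mean, which is why the back-transformed bounds (e.g. 3.05×10^11 to 1.09×10^12 around 5.75×10^11) are not.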
Abstract:
Obtaining the automatic 3D profile of objects is one of the most important issues in computer vision. With this information, a large number of applications become feasible: from visual inspection of industrial parts to 3D reconstruction of the environment for mobile robots. In order to obtain 3D data, range finders can be used. The coded structured light approach is one of the most widely used techniques to retrieve the 3D information of an unknown surface. An overview of the existing techniques, as well as a new classification of patterns for structured light sensors, is presented. Such systems belong to the group of active triangulation methods, which are based on projecting a light pattern and imaging the illuminated scene from one or more points of view. Since the patterns are coded, correspondences between points of the image(s) and points of the projected pattern can easily be found. Once correspondences are found, a classical triangulation strategy between camera(s) and projector device leads to the reconstruction of the surface. Advantages and constraints of the different patterns are discussed.
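Once a coded correspondence pairs an image point with a pattern point, the surface point is recovered by triangulating the camera ray against the projector ray. A minimal sketch of that step - the midpoint strategy and all names are our choices; real systems also handle calibration and the parallel-ray degenerate case:

```python
def triangulate(o1, d1, o2, d2):
    """Midpoint of the shortest segment between two 3-D rays
    p = o + t*d (camera ray and projector ray for one coded
    correspondence). The rays need not intersect exactly; with
    parallel rays the 2x2 system below is singular (det == 0)."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    w = [o1[i] - o2[i] for i in range(3)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    det = b * b - a * c
    # Ray parameters minimising the distance between the two rays.
    t = (c * dot(d1, w) - b * dot(d2, w)) / det
    s = (b * dot(d1, w) - a * dot(d2, w)) / det
    p1 = [o1[i] + t * d1[i] for i in range(3)]
    p2 = [o2[i] + s * d2[i] for i in range(3)]
    return [(p1[i] + p2[i]) / 2 for i in range(3)]
```

For example, the camera ray from (0,0,0) along (0,0,1) and the projector ray from (1,0,0) along (-1,0,1) meet at the surface point (0,0,1).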
Abstract:
Aim: This study compares the direct, macroecological approach (MEM) for modelling species richness (SR) with the more recent approach of stacking predictions from individual species distributions (S-SDM). We implemented both approaches on the same dataset and discuss their respective theoretical assumptions, strengths and drawbacks. We also tested how both approaches performed in reproducing observed patterns of SR along an elevational gradient. Location: Two study areas in the Alps of Switzerland. Methods: We implemented MEM by relating the species counts to environmental predictors with statistical models, assuming a Poisson distribution. S-SDM was implemented by modelling each species distribution individually and then stacking the obtained prediction maps in three different ways - summing binary predictions, summing random draws of binomial trials and summing predicted probabilities - to obtain a final species count. Results: The direct MEM approach yields nearly unbiased predictions centred around the observed mean values, but with a lower correlation between predictions and observations than that achieved by the S-SDM approaches. This method also cannot provide any information on species identity and, thus, community composition. It does, however, accurately reproduce the hump-shaped pattern of SR observed along the elevational gradient. The S-SDM approach summing binary maps can predict individual species and thus communities, but tends to overpredict SR. The two other S-SDM approaches - the summed binomial trials based on predicted probabilities and the summed predicted probabilities - do not overpredict richness, but they predict many competing end points of assembly or they lose the individual species predictions, respectively. Furthermore, all S-SDM approaches fail to appropriately reproduce the observed hump-shaped patterns of SR along the elevational gradient. Main conclusions: The macroecological approach and S-SDM have complementary strengths. We suggest that the two could be used in combination to obtain better SR predictions, for instance by constraining S-SDM with MEM predictions.
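The three stacking strategies described above can be sketched for a single site as follows. This is a minimal illustration with made-up probabilities; the original work operates on full prediction maps, and the function name is ours.

```python
import random

def stack_sdm(probs, threshold=0.5, mode="prob", rng=None):
    """Stack per-species occurrence probabilities at one site into a
    species-richness prediction, in the three ways described above."""
    if mode == "binary":      # sum of thresholded presence/absence maps
        return sum(p >= threshold for p in probs)
    if mode == "binomial":    # sum of Bernoulli draws (one per species)
        rng = rng or random.Random(0)  # fixed seed for reproducibility
        return sum(rng.random() < p for p in probs)
    return sum(probs)         # sum of predicted probabilities
```

With per-species probabilities [0.9, 0.6, 0.2], summing probabilities gives an expected richness of 1.7, while thresholding at 0.5 gives 2 - a small illustration of how binary stacking can overpredict richness.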
Abstract:
The shortest tube of constant diameter that can form a given knot represents the 'ideal' form of the knot. Ideal knots provide an irreducible representation of the knot, and they have some intriguing mathematical and physical features, including a direct correspondence with the time-averaged shapes of knotted DNA molecules in solution. Here we describe the properties of ideal forms of composite knots - knots obtained by the sequential tying of two or more independent knots (called factor knots) on the same string. We find that the writhe (related to the handedness of crossing points) of composite knots is the sum of that of the ideal forms of the factor knots. By comparing ideal composite knots with simulated configurations of knotted, thermally fluctuating DNA, we conclude that the additivity of writhe applies also to randomly distorted configurations of composite knots and their corresponding factor knots. We show that composite knots with several factor knots may possess distinct structural isomers that can be interconverted only by loosening the knot.
Abstract:
Background: Hirschsprung's disease (HSCR) is a congenital malformation of the enteric nervous system due to the arrest of migration of the neural crest cells that form the myenteric and submucosal plexuses. It leads to an aganglionic intestinal segment, which is permanently contracted, causing intestinal obstruction. Its incidence is approximately 1/5000 births, and males are more frequently affected, with a male/female ratio of 4/1. The diagnosis is in most cases made within the first year of life. Rectal biopsy of the mucosa and submucosa is the diagnostic gold standard. Purpose: The aim of this study was to compare two surgical approaches for HSCR, the Duhamel technique and the transanal endorectal pull-through (TEPT), in terms of indications, duration of surgery, duration of hospital stay, postoperative treatment, complications, frequency of enterocolitis and functional outcomes. Methods: Fifty-nine patients were treated for HSCR by one of the two methods in our department of pediatric surgery between 1994 and 2010. These patients were separated into two groups (I: Duhamel, II: TEPT), which were compared on the basis of medical records; the groups were compared statistically using ANOVA tests. The first group includes 43 patients and the second 16 patients. It is noteworthy that twenty-four patients (about 41% of all patients) were referred from abroad (Western Africa). Continence was evaluated with Krickenbeck's score. Results: Statistically, this study showed that operation duration, hospital stay, postoperative fasting and duration of postoperative antibiotics were significantly shorter (p < 0.05) in group II (TEPT). However, age at operation and length of the aganglionic segment showed no significant difference between the two groups. The continence follow-up showed generally good results (Krickenbeck scores 1; 2.1; 3.1) in both groups, with a slight tendency to constipation in group I and to soiling in group II. Conclusion: We found two indications for the Duhamel method: referral from a country without careful postoperative surveillance and/or a previous colostomy. Even if the Duhamel technique tends to be replaced by TEPT, it remains the best operative approach for some selected patients. TEPT has also proved to have some advantages, but it must be followed carefully because of, among other points, the postoperative dilatations. Our postoperative standards, like digital rectal examination and anal dilatations, seem to reduce the occurrence of complications like rectal spur and anal/anastomosis stenosis, in the Duhamel method and the TEPT technique respectively.
Abstract:
In this paper, we study the average inter-crossing number between two random walks and two random polygons in three-dimensional space. The random walks and polygons in this paper are the so-called equilateral random walks and polygons, in which each segment of the walk or polygon is of unit length. We show that the mean average inter-crossing number ICN between two equilateral random walks of the same length n is approximately linear in n, and we were able to determine the prefactor of the linear term, which is a = (3 ln 2)/8 ≈ 0.2599. In the case of two random polygons of length n, the mean average inter-crossing number ICN is also linear, but the prefactor of the linear term is different from that of the random walks. These approximations apply when the starting points of the random walks and polygons are a distance p apart and p is small compared to n. We propose a fitting model that captures the theoretical asymptotic behaviour of the mean average ICN for large values of p. Our simulation results show that the model in fact works very well for the entire range of p. We also study the mean ICN between two equilateral random walks and polygons of different lengths. An interesting result is that even if one random walk (polygon) has a fixed length, the mean average ICN between the two random walks (polygons) still approaches infinity as the length of the other random walk (polygon) approaches infinity. The data provided by our simulations match our theoretical predictions very well.
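The prefactor quoted above is a closed-form constant, so it is easy to check numerically; only the constant itself comes from the abstract, the snippet is ours:

```python
import math

# Prefactor of the linear growth of the mean average inter-crossing
# number for two equilateral random walks of equal length n:
# ICN ~ a * n, with a = (3 ln 2) / 8.
a = 3 * math.log(2) / 8
print(round(a, 4))  # 0.2599
```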
Abstract:
While support for economic growth constitutes a fundamental objective of economic policy-making, this kind of growth is naturally limited by a finite planet. This article argues that, from the point of view of intergenerational justice, realizing a concept of dematerialization and, in effect, of a non-growing economy (in the sense of an absolute decoupling of economic growth from energy and material consumption) can be justified. Growth can thus also be understood primarily as an improvement in the quality of life rather than as an expansion of sheer quantities of output. A drastic reduction of material throughput is therefore needed, above all in high-income countries. After presenting some critiques of existing proposals, the article focuses on the arguments for why future economic policy should be labelled "ecological", and then discusses options for turning the ideas of the theoretical framework presented into manageable policy tasks. Here it is argued that the classical approach of internalizing external effects, often followed by orthodox economic policy decisions, is not fully capable of reflecting ecological changes in the price structures of markets. Formal institutions (industrial and consumer policy-making) and informal institutions (households) therefore represent key points of sustainable economic policy, pointing to individual as well as collective responsibility for filling this substantial gap.
Abstract:
PURPOSE: Early assessment of radiotherapy (RT) quality in the ongoing EORTC trial comparing primary temozolomide versus RT in low-grade gliomas. MATERIALS AND METHODS: RT plans provided for dummy cases were evaluated and compared against expert plans. We analysed: (1) tumour and organs-at-risk delineation; (2) geometric and dosimetric characteristics; (3) planning parameters, compliance with the dose prescription and Dmax for organs at risk (OAR); (4) indices: RTOG conformity index (CI), coverage factor (CF), tissue protection factor (PF), conformity number (CN = PF × CF) and dose homogeneity in the PTV (U). RESULTS: Forty-one RT plans were evaluated. Only two (5%) centres were requested to repeat CTV-PTV delineations. Three (7%) plans had a significant under-dosage, and in one the dose homogeneity deviated by more than 10%. Dose distribution was good, with mean values of 1.5, 1, 0.68 and 0.68 (ideal values = 1) for CI, CF, PF and CN, respectively. CI and CN correlated strongly with PF, and both correlated with the PTV. Planning with more beams seems to increase PTV(Dmin), improving CF. U correlated with PTV(Dmax). CONCLUSION: Preliminary results of the dummy run procedure indicate that most centres conformed to the protocol requirements. To quantify plan quality we recommend systematic calculation of U and of either CI or CN, both of which measure the amount of irradiated normal brain tissue.
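Given the index definitions above, the reported mean values can be reproduced from three volumes. A small sketch assuming the standard RTOG/van't Riet formulas, which the abstract's CN = PF × CF relation appears to follow; the function name and example volumes are ours:

```python
def conformity_indices(v_ptv, v_ri, v_ptv_ri):
    """Conformity indices from three volumes (e.g. in cm^3):
    v_ptv    - planning target volume,
    v_ri     - total volume enclosed by the reference isodose,
    v_ptv_ri - part of the PTV covered by the reference isodose.
    Definitions follow the usual RTOG / van't Riet formulas, which
    this abstract appears to use; treat them as assumed."""
    ci = v_ri / v_ptv           # RTOG conformity index
    cf = v_ptv_ri / v_ptv       # coverage factor
    pf = v_ptv_ri / v_ri        # tissue protection factor
    return ci, cf, pf, cf * pf  # conformity number CN = CF * PF
```

With a 100 cm³ PTV fully covered by a 150 cm³ reference isodose volume, this gives CI = 1.5, CF = 1.0 and PF = CN ≈ 0.67, close to the mean values reported above.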
Abstract:
We present a method for analyzing the curvature (second derivatives) of the conical intersection hyperline at an optimized critical point. Our method uses the projected Hessians of the degenerate states after elimination of the two branching space coordinates, and is equivalent to a frequency calculation on a single Born-Oppenheimer potential-energy surface. Based on the projected Hessians, we develop an equation for the energy as a function of a set of curvilinear coordinates where the degeneracy is preserved to second order (i.e., the conical intersection hyperline). The curvature of the potential-energy surface in these coordinates is the curvature of the conical intersection hyperline itself, and thus determines whether one has a minimum or saddle point on the hyperline. The equation used to classify optimized conical intersection points depends in a simple way on the first- and second-order degeneracy splittings calculated at these points. As an example, for fulvene, we show that the two optimized conical intersection points of C2v symmetry are saddle points on the intersection hyperline. Accordingly, there are further intersection points of lower energy, and one of C2 symmetry - presented here for the first time - is found to be the global minimum in the intersection space.
Abstract:
In the past, sensor networks in cities have been limited to fixed sensors, embedded in particular locations, under centralised control. Today, new applications can leverage wireless devices and use them as sensors to create aggregated information. In this paper, we show that the emerging patterns unveiled through the analysis of large sets of aggregated digital footprints can provide novel insights into how people experience the city and into some of the drivers behind these emerging patterns. In particular, we explore the capacity to quantify the evolution of the attractiveness of urban space with a case study in the area of the New York City Waterfalls, a public art project of four man-made waterfalls rising from the New York Harbor. Methods to study the impact of an event of this nature have traditionally been based on the collection of static information such as surveys and ticket-based people counts, which allow one to generate estimates of visitors' presence in specific areas over time. In contrast, our contribution makes use of the dynamic data that visitors generate, such as the density and distribution of aggregate phone calls and photos taken in different areas of interest over time. Our analysis provides novel ways to quantify the impact of a public event on the distribution of visitors and on the evolution of the attractiveness of nearby points of interest. This information has potential uses for local authorities and researchers, as well as for service providers such as mobile network operators.
Abstract:
OBJECTIVE: Evaluation of a French translation of the Addiction Severity Index (ASI) in 100 (78 male) alcoholic patients. METHOD: Validity of the instrument was assessed by measuring test-retest and interrater reliability, internal consistency, and convergence and discrimination between items and scales. Concurrent validity was assessed by comparing scores from the ASI with those obtained from three other clinimetric instruments. RESULTS: Test-retest reliability of ASI scores (after a 10-day interval) was good (r = 0.63 to r = 0.95). Interrater reliability was evaluated using six video recordings of patient interviews. Severity ratings assigned by the six raters were significantly different (p < .05), but 72% of the ratings assigned by those who viewed the videos were within two points of the interviewer's severity ratings. Cronbach's alpha coefficient of internal consistency varied from 0.58 to 0.81 across scales. The average item-to-scale convergent validity (r value) was 0.49 (range 0.00 to 0.84) for composite scores and 0.35 (range 0.00 to 0.68) for severity ratings, whereas discriminant validity was 0.11 on average (range -0.19 to 0.46) for composite scores and 0.12 (range -0.20 to 0.52) for severity ratings. Finally, concurrent validity with the following instruments was assessed: Severity of Alcohol Dependence Questionnaire (40% shared variance with the ASI alcohol scale), Michigan Alcoholism Screening Test (2% shared variance with the ASI alcohol scale) and Hamilton Depression Rating Scale (31% shared variance with the ASI psychiatric scale). CONCLUSIONS: The Addiction Severity Index covers a large scope of problems encountered among alcoholics and quantifies the need for treatment. This French version presents acceptable criteria of reliability and validity.
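The "shared variance" figures above are squared correlation coefficients, so converting between the two is a one-liner. A sketch; the back-calculated r = 0.63 below is our illustration, not a number reported by the study:

```python
import math

def shared_variance(r):
    """Shared variance (percentage of variance in common) implied by
    a correlation coefficient r."""
    return 100 * r * r

# The 40% shared variance with the ASI alcohol scale corresponds to
# a correlation of roughly r = sqrt(0.40):
r_alcohol = math.sqrt(0.40)
print(round(r_alcohol, 2))  # 0.63
```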
Abstract:
The efficacy of social care, publicly and universally provided, has been contested from two different points of view. First, advocates of targeted social policy have criticized the Matthew effect of universal provision; second, theories arguing in favour of heterogeneous rationalities between men and women, and even different preferences among women, predict that universal provision of services limits women's choices more than home care allowances do. The author tests both hypotheses and concludes that, at least in the case of adult care, women's choices are significantly affected by women's social positions and by the availability of public services. Furthermore, targeting through means-tested eligibility criteria has no significant effect on inequality but, confirming the redistributive paradox, reduces women's options.