760 results for Measure of Noncompactness
Abstract:
Alzheimer's disease (AD) disrupts functional connectivity in distributed cortical networks. We analyzed changes in the S-estimator, a measure of multivariate intraregional synchronization, in electroencephalogram (EEG) source space in 15 mild AD patients versus 15 age-matched controls to evaluate its potential as a marker of AD progression. All participants underwent two clinical evaluations and two EEG recording sessions, at diagnosis and one year later. The main effect of AD was hyposynchronization in the medial temporal and frontal regions and relative hypersynchronization in the posterior cingulate, precuneus, cuneus, and parietotemporal cortices. However, the S-estimator did not change over time in either group. This result motivated an analysis of rapidly versus slowly progressing AD patients. Rapidly progressing patients showed a significant reduction in synchronization over time, manifested in the left frontotemporal cortex. Thus, the evolution of source EEG synchronization over time correlates with the rate of disease progression and should be considered a cost-effective AD biomarker.
Abstract:
Computational network analysis provides new methods to analyze the brain's structural organization based on diffusion imaging tractography data. Networks are characterized by global and local metrics that have recently given promising insights into diagnosis and the further understanding of psychiatric and neurologic disorders. Most of these metrics are based on the idea that information in a network flows along the shortest paths. In contrast to this notion, communicability is a broader measure of connectivity which assumes that information could flow along all possible paths between two nodes. In our work, the features of network metrics related to communicability were explored for the first time in the healthy structural brain network. In addition, the sensitivity of such metrics was analysed using simulated lesions to specific nodes and network connections. Results showed advantages of communicability over conventional metrics in detecting densely connected nodes as well as subsets of nodes vulnerable to lesions. In addition, communicability centrality was shown to be widely affected by the lesions, and the changes were negatively correlated with the distance from the lesion site. In summary, our analysis suggests that communicability metrics may provide insight into the integrative properties of the structural brain network and that these metrics may be useful for the analysis of brain networks in the presence of lesions. Nevertheless, the interpretation of communicability is not straightforward; hence these metrics should be used as a supplement to the more standard connectivity network metrics.
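The communicability idea admits a compact sketch: it is the matrix exponential of the adjacency matrix, so walks of every length contribute, down-weighted by 1/k!. A minimal illustration on a toy graph (the adjacency matrix is made up for illustration, not brain data), assuming NumPy and SciPy are available:

```python
import numpy as np
from scipy.linalg import expm

# Toy undirected graph on 4 nodes (illustrative only)
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Communicability matrix G = e^A = sum_k A^k / k!:
# every walk between two nodes contributes, with longer walks
# down-weighted by 1/k!, in contrast to shortest-path metrics.
G = expm(A)

# Nodes 0 and 3 share no direct edge, yet their communicability is
# positive because indirect walks (e.g. 0-2-3) contribute.
print(G[0, 3] > 0)  # True
```

Shortest-path metrics would ignore all but one route between 0 and 3; communicability aggregates them all, which is why it picks up densely connected nodes and lesion effects that path-based metrics miss.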
Abstract:
Soil penetration resistance (PR) is a measure of soil compaction closely related to soil structure and plant growth. However, the variability in PR hampers statistical analysis. This study aimed to evaluate the effect of the variability of soil PR on the efficiency of parametric and nonparametric analyses in identifying significant effects of soil compaction, and to classify the coefficient of variation of PR into low, intermediate, high, and very high classes. On six dates, the PR of a typical dystrophic Red Ultisol under continuous no-tillage for 16 years was measured. Three tillage and/or traffic conditions were established with the application of: (i) no chiseling or additional traffic, (ii) additional compaction, and (iii) chiseling. On each date, the nineteen PR data (measured every 1.5 cm to a depth of 28.5 cm) were grouped into layers of different thicknesses. In each layer, the treatment effects were evaluated by variance (ANOVA) and Kruskal-Wallis analyses in a completely randomized design, and the coefficients of variation of all analyses were classified (low, intermediate, high, and very high). The ANOVA performed better in discriminating the compaction effects, but the rejection rate of the null hypothesis decreased from 100 to 80 % when the coefficient of variation increased from 15 to 26 %. The values of 15 and 26 % were the thresholds separating the low/intermediate and the high/very high coefficient of variation classes of PR in this Ultisol.
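The reported thresholds lend themselves to a trivial classifier. A sketch under one reading of the abstract (the assumption being that a CV below 15 % falls in the low/intermediate range, 15-26 % is high, and above 26 % is very high; the boundary between low and intermediate is not stated):

```python
def classify_cv(cv_percent: float) -> str:
    """Classify the coefficient of variation (%) of penetration
    resistance using the 15 % and 26 % thresholds reported for this
    Ultisol. The exact class boundaries are an assumed reading of
    the abstract, not a published scale."""
    if cv_percent < 15:
        return "low/intermediate"
    if cv_percent <= 26:
        return "high"
    return "very high"

print(classify_cv(20))  # high
```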
Abstract:
This report summarizes research conducted at Iowa State University on behalf of the Iowa Department of Transportation, focusing on the volumetric state of hot-mix asphalt (HMA) mixtures as they transition from stable to unstable configurations. This has traditionally been addressed during mix design by meeting a minimum voids in the mineral aggregate (VMA) requirement, based solely upon the nominal maximum aggregate size without regard to other significant aggregate-related properties. The goal was to expand the current specification to include additional aggregate properties, e.g., fineness modulus, percent crushed fine and coarse aggregate, and their interactions. The work was accomplished in three phases: a literature review, extensive laboratory testing, and statistical analysis of test results. The literature review focused on the history and development of the current specification, laboratory methods of identifying critical mixtures, and the effects of other aggregate-related factors on critical mixtures. The laboratory testing involved three maximum aggregate sizes (19.0, 12.5, and 9.5 millimeters), three gradations (coarse, fine, and dense), and combinations of natural and manufactured coarse and fine aggregates. Specimens were compacted using the Superpave Gyratory Compactor (SGC), conventionally tested for bulk and maximum theoretical specific gravities, and physically tested using the Nottingham Asphalt Tester (NAT) under a repeated-load confined configuration to identify the transition from sound to unsound. The statistical analysis used ANOVA and linear regression to examine the effects of identified aggregate factors on critical state transitions in asphalt paving mixtures and to develop predictive equations. The results clearly demonstrate that the volumetric conditions of an HMA mixture at the stable-to-unstable threshold are influenced by a composite measure of the maximum aggregate size and gradation and by aggregate shape and texture.
The currently defined VMA criterion, while significant, is seen to be insufficient by itself to correctly differentiate sound from unsound mixtures. Under current specifications, many otherwise sound mixtures are subject to rejection solely on the basis of failing to meet the VMA requirement. Based on the laboratory data and statistical analysis, a new paradigm for volumetric mix design is proposed that explicitly accounts for aggregate factors (gradation, shape, and texture).
Abstract:
The complex relationship between structural and functional connectivity, as measured by noninvasive imaging of the human brain, poses many unresolved challenges and open questions. Here, we apply analytic measures of network communication to the structural connectivity of the human brain and explore the capacity of these measures to predict resting-state functional connectivity across three independently acquired datasets. We focus on the layout of shortest paths across the network and on two communication measures, search information and path transitivity, which account for how these paths are embedded in the rest of the network. Search information is an existing measure of the information needed to access or trace shortest paths; we introduce path transitivity to measure the density of local detours along the shortest path. We find that both search information and path transitivity predict the strength of functional connectivity among both connected and unconnected node pairs. They do so at levels that match or significantly exceed those of path length measures, Euclidean distance, and computational models of neural dynamics. This capacity suggests that dynamic couplings due to interactions among neural elements in brain networks are substantially influenced by the broader network context adjacent to the shortest communication pathways.
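Search information can be sketched in a few lines: the negative log-probability that an unbiased random walker happens to follow the shortest path. The variant below, in which each step is taken with probability 1/degree, is a simplification of the formulations used in this literature, and the graph is a made-up toy example:

```python
import math
from collections import deque

def shortest_path(adj, s, t):
    # BFS shortest path in an unweighted graph given as {node: [neighbors]}
    prev = {s: None}
    q = deque([s])
    while q:
        u = q.popleft()
        if u == t:
            break
        for v in adj[u]:
            if v not in prev:
                prev[v] = u
                q.append(v)
    path, u = [], t
    while u is not None:
        path.append(u)
        u = prev[u]
    return path[::-1]

def search_information(adj, s, t):
    # Probability that an unbiased random walker follows the shortest
    # path: product of 1/degree at each step along it (one common,
    # simplified variant). Return its negative base-2 logarithm.
    path = shortest_path(adj, s, t)
    p = 1.0
    for u in path[:-1]:
        p *= 1.0 / len(adj[u])
    return -math.log2(p)

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
# Shortest path 0-2-3: p = (1/2)*(1/3), so SI = log2(6) ~ 2.585 bits
print(round(search_information(adj, 0, 3), 3))
```

The more densely the shortest path is embedded in a well-connected neighborhood, the harder it is for a walker to stay on it, and the higher the search information.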
Abstract:
The physical quality of Amazonian soils is relatively unexplored, due to the unique characteristics of these soils. The S index of soil physical quality is a widely accepted measure of the structural quality of soils and has been used to characterize the structural quality of some tropical soils, such as those of the Cerrado ecoregion of Brazil. The research objective was to evaluate the physical quality index of an Amazonian dystrophic Oxisol under different management systems. Soils under five management systems were sampled in Paragominas, State of Pará: 1) a 20-year-old second-growth forest (Forest); 2) Brachiaria sp pasture; 3) four years of no-tillage (NT4); 4) eight years of no-tillage (NT8); and 5) two years of conventional tillage (CT2). The soil samples were evaluated for bulk density, macro- and microporosity, and soil water retention. The physical quality index of the samples was calculated and the resulting value correlated with soil organic matter, bulk density, and porosity. The surface layers of all systems were more compacted than those of the forest. The physical quality of the soil was best represented by the relations of the S index to bulk density and soil organic matter.
Abstract:
General Summary Although the chapters of this thesis address a variety of issues, the principal aim is common: to test economic ideas in an international economic context. The intention has been to supply empirical findings using the largest suitable data sets and the most appropriate empirical techniques. This thesis can roughly be divided into two parts: the first, corresponding to the first two chapters, investigates the link between trade and the environment; the second, the last three chapters, is related to economic geography issues. Environmental problems are omnipresent in the daily press nowadays, and one of the arguments put forward is that globalisation causes severe environmental problems through the reallocation of investments and production to countries with less stringent environmental regulations. A measure of the amplitude of this undesirable effect is provided in the first part. The third and fourth chapters explore the productivity effects of agglomeration. The computed spillover effects between different sectors indicate how cluster formation might be productivity-enhancing. The last chapter is not about how to better understand the world but about how to measure it, and it was just a great pleasure to work on it. "The Economist" writes every week about the impressive population and economic growth observed in China and India, and everybody agrees that the world's center of gravity has shifted. But by how much and how fast did it shift? An answer is given in the last part, which proposes a global measure for the location of world production and allows us to visualize our results in Google Earth. A short summary of each of the five chapters is provided below.
The first chapter, entitled "Unraveling the World-Wide Pollution-Haven Effect", investigates the relative strength of the pollution haven effect (PH, comparative advantage in dirty products due to differences in environmental regulation) and the factor endowment effect (FE, comparative advantage in dirty, capital-intensive products due to differences in endowments). We compute the pollution content of imports using the IPPS coefficients (for three pollutants, namely biological oxygen demand, sulphur dioxide and toxic pollution intensity, for all manufacturing sectors) provided by the World Bank and use a gravity-type framework to isolate the two above-mentioned effects. Our study covers 48 countries, classified into 29 Southern and 19 Northern countries, and uses the lead content of gasoline as a proxy for environmental stringency. For North-South trade we find significant PH and FE effects that go in the expected, opposite directions and are of similar magnitude. However, when looking at world trade, the effects become very small because of the high North-North trade share, for which we have no a priori expectations about the signs of these effects. Therefore, popular fears about the trade effects of differences in environmental regulations might be exaggerated. The second chapter is entitled "Is trade bad for the Environment? Decomposing worldwide SO2 emissions, 1990-2000". First, we construct a novel and large database containing reasonable estimates of SO2 emission intensities per unit labor that vary across countries, periods and manufacturing sectors. Then we use these original data (covering 31 developed and 31 developing countries) to decompose worldwide SO2 emissions into the three well-known dynamic effects (scale, technique and composition effects). We find that the positive scale (+9.5%) and the negative technique (-12.5%) effects are the main driving forces of emission changes.
Composition effects between countries and sectors are smaller, both negative and of similar magnitude (-3.5% each). Given that trade matters via the composition effects, this means that trade reduces total emissions. We next construct, in a first experiment, a hypothetical world where no trade happens, i.e. each country produces its imports at home and no longer produces its exports. The difference between the actual world and this no-trade world allows us (ignoring price effects) to compute a static first-order trade effect. The latter increases total world emissions because it allows, on average, dirty countries to specialize in dirty products. However, this effect is smaller in 2000 (3.5%) than in 1990 (10%), in line with the negative dynamic composition effect identified in the previous exercise. We then propose a second experiment, comparing effective emissions with the maximum or minimum possible level of SO2 emissions. These hypothetical levels of emissions are obtained by reallocating labour across sectors within each country (under the country-employment and world industry-production constraints). Using linear programming techniques, we show that emissions are 90% below the worst case, but that they could still be reduced by another 80% if emissions were minimized. The findings from this chapter go together with those from chapter one in the sense that trade-induced composition effects do not seem to be the main source of pollution, at least in the recent past. Turning to the economic geography part of this thesis, the third chapter, entitled "A Dynamic Model with Sectoral Agglomeration Effects", consists of a short note that derives the theoretical model estimated in the fourth chapter. The derivation is directly based on the multi-regional framework of Ciccone (2002) but extends it to include sectoral disaggregation and a temporal dimension.
This allows us to formally write present productivity as a function of past productivity and of other contemporaneous and past control variables. The fourth chapter, entitled "Sectoral Agglomeration Effects in a Panel of European Regions", takes the final equation derived in chapter three to the data. We investigate the empirical link between density and labour productivity based on regional data (245 NUTS-2 regions over the period 1980-2003). Using dynamic panel techniques allows us to control for the possible endogeneity of density and for region-specific effects. We find a positive long-run elasticity of labour productivity with respect to density of about 13%. When using data at the sectoral level, it seems that positive cross-sector and negative own-sector externalities are present in manufacturing, while financial services display strong positive own-sector effects. The fifth and last chapter, entitled "Is the World's Economic Center of Gravity Already in Asia?", computes the world economic, demographic and geographic centers of gravity for 1975-2004 and compares them. Based on data for the largest cities in the world and using the physical concept of the center of mass, we find that the world's economic center of gravity is still located in Europe, even though there is a clear shift towards Asia. To sum up, this thesis makes three main contributions. First, it provides new estimates of the orders of magnitude of the role of trade in the globalisation and environment debate. Second, it computes reliable and disaggregated elasticities for the effect of density on labour productivity in European regions. Third, it allows us, in a geometrically rigorous way, to track the path of the world's economic center of gravity.
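The center-of-mass construction used in the last chapter can be sketched directly: place each city on the unit sphere, take the weighted three-dimensional mean, and project the result back to latitude/longitude. The city weights below are made up for illustration and are not the thesis data:

```python
import math

def center_of_gravity(points):
    """points: iterable of (lat_deg, lon_deg, weight).
    Returns the (lat, lon) of the 3-D weighted center of mass
    projected back onto the sphere's surface."""
    x = y = z = w_sum = 0.0
    for lat, lon, w in points:
        la, lo = math.radians(lat), math.radians(lon)
        x += w * math.cos(la) * math.cos(lo)
        y += w * math.cos(la) * math.sin(lo)
        z += w * math.sin(la)
        w_sum += w
    x, y, z = x / w_sum, y / w_sum, z / w_sum
    # Project the interior point back to the surface
    lat = math.degrees(math.atan2(z, math.hypot(x, y)))
    lon = math.degrees(math.atan2(y, x))
    return lat, lon

# Hypothetical economic weights, purely illustrative:
cities = [(51.5, -0.1, 3.0),   # London
          (40.7, -74.0, 3.0),  # New York
          (35.7, 139.7, 2.0)]  # Tokyo
print(center_of_gravity(cities))
```

Working in 3-D Cartesian coordinates avoids the distortions of averaging longitudes directly, which is what makes the construction geometrically rigorous.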
Abstract:
Purpose: To compare the performance of the Glaucoma Quality of Life-15 (GQL-15) questionnaire, intraocular pressure measurement (IOP, Goldmann tonometry), and a measure of visual field loss, the Moorfields Motion Displacement Test (MDT), in detecting glaucomatous eyes in a self-referred population. Methods: The GQL-15 has been suggested to correlate with visual disability and psychophysical measures of visual function in glaucoma patients. The Moorfields MDT is a multi-location perimetry test with 32 white line stimuli presented on a grey background on a standard laptop computer. Each stimulus is displaced between computer frames to give the illusion of "apparent motion". Participants (N=312; 90% older than 45 years; 20.5% with a family history of glaucoma) self-referred to an advertised World Glaucoma Day event (March 2009) at the Jules Gonin Eye Hospital, Lausanne, Switzerland. Participants underwent a clinical exam (IOP, slit lamp, angle and disc examination by a general ophthalmologist); 90% completed a GQL-15 questionnaire and over 50% completed an MDT test in both eyes. Those classified as abnormal on one or more of the following (IOP >21 mmHg, GQL-15 score >20, MDT score >2, clinical exam) underwent a follow-up clinical examination by a glaucoma specialist, including imaging and threshold perimetry. After the second examination, subjects were classified as "healthy" (H), "glaucoma suspect" (GS; ocular hypertension and/or suspicious disc, angle closure with SD) or "glaucomatous" (G). Results: One hundred and ten subjects completed all four initial examinations; 69 of these were referred for the second examination and were classified as 8 G, 24 GS, and 37 H. The MDT detected 7/8 G and 7/24 GS, with a false referral rate of 3.8%. IOP detected 2/8 G and 8/24 GS, with a false referral rate of 8.9%. The GQL-15 detected 4/8 G and 16/24 GS, with a false referral rate of 42%.
Conclusions: In this sample of participants attending a self-referral glaucoma detection event, the MDT performed significantly better than the GQL-15 and IOP in discriminating glaucomatous patients from healthy subjects. Further studies are required to assess the potential of the MDT as a glaucoma screening tool.
Abstract:
Conviction statistics were the first criminal statistics available in Europe during the nineteenth century. Their main weaknesses as crime measures and for comparative purposes were identified by Alphonse de Candolle in the 1830s. Currently, they are seldom used by comparative criminologists, although they provide a less valid but more reliable measure of crime and formal social control than police statistics. This article uses conviction statistics, compiled from the four editions of the European Sourcebook of Crime and Criminal Justice Statistics, to study the evolution of persons convicted in European countries from 1990 to 2006. Trends in persons convicted for six offences (intentional homicide, assault, rape, robbery, theft, and drug offences) across up to 26 European countries are analysed. These trends are established for the whole of Europe as well as for a cluster of Western European countries and a cluster of Central and Eastern European countries. The analyses show similarities between both regions of Europe at the beginning and at the end of the period under study. After a general increase in the rate of persons convicted in the early 1990s across the whole of Europe, trends followed different directions in Western and in Central and Eastern Europe. However, during the 2000s, a certain stability of the rates of persons convicted for intentional homicide can be observed throughout Europe, accompanied by a general decrease in the rate of persons convicted for property offences and an increase in the rate of those convicted for drug offences. The latter goes together with an increase in the rate of persons convicted for non-lethal violent offences, which only reached some stability at the end of the time series. These trends show that there is no general crime drop in Europe. After a discussion of possible theoretical explanations, a multifactor model, inspired by opportunity-based theories, is proposed to explain the trends observed.
Abstract:
Naive scale invariance is not a true property of natural images. Natural monochrome images possess a much richer geometrical structure, which is particularly well described in terms of multiscaling relations. This means that the pixels of a given image can be decomposed into sets, the fractal components of the image, with well-defined scaling exponents [Turiel and Parga, Neural Comput. 12, 763 (2000)]. Here it is shown that hyperspectral representations of natural scenes also exhibit multiscaling properties, displaying the same kind of behavior. A precise measure of the informational relevance of the fractal components is also given, and it is shown that there are important differences between the intrinsically redundant red-green-blue system and the decorrelated one defined in Ruderman, Cronin, and Chiao [J. Opt. Soc. Am. A 15, 2036 (1998)].
Abstract:
The Mehlich-1 (M-1) extractant and monocalcium phosphate in acetic acid (MCPa) extract available P and S through acidity and through ligand exchange, either of the extractant's sulfate by soil phosphate or of the extractant's phosphate by soil sulfate. In clayey soils with greater P adsorption capacity, i.e., lower remaining P (Rem-P) values, which correspond to soils with greater phosphate buffer capacity (PBC) and stronger acidity buffering, the initially low pH of the extractants rises over their time of contact with the soil toward the soil pH, and the sulfate of M-1 or the phosphate of MCPa is adsorbed at adsorption sites whether or not these are already occupied by those anions. The extractant thereby loses its extraction capacity, a phenomenon known as loss of extraction capacity or consumption of the extractant, which is the object of this study. Twenty soil samples were chosen to cover the Rem-P range (0 to 60 mg L-1), and Rem-P was used as a measure of the PBC. The P and S contents available from the soil samples through M-1 and MCPa, and the contents of other nutrients and of organic matter, were determined. To determine the loss of extraction capacity, after the rest period the pH and the P and S contents were measured in both soil extracts. Although significant, the loss of acidity-driven extraction capacity of the M-1 and MCPa extractants with decreasing Rem-P did not have a very expressive effect. For M-1, a "linear plateau" model described a discontinuous loss of P extraction capacity with decreasing Rem-P (i.e., increasing PBC), suggesting that a discontinuous model should also be adopted for interpreting the available P of soils with different Rem-P values.
In contrast, a continuous linear response was observed between the P variables in the soil extract and Rem-P for the MCPa extractant, which shows an increasing loss of extraction capacity of this extractant with increasing soil PBC, indicating the validity of the linear relationship between the available S of the soil and the PBC, estimated by Rem-P, as currently adopted.
Abstract:
Proper examination of the pupil provides an objective measure of the integrity of the pregeniculate afferent visual pathway and allows assessment of sympathetic and parasympathetic innervation to the eye. Infrared videography and pupillography are increasingly used to study the dynamic behavior of the pupil in common disorders, such as Horner's syndrome and tonic pupil.
Abstract:
Given a compact pseudo-metric space, we associate to it upper and lower dimensions, depending only on the pseudo-metric. Then we construct a doubling measure for which the measure of a dilated ball is closely related to these dimensions.
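For context, a standard way to state the doubling property (a textbook definition, not a formula taken from this paper): a measure $\mu$ on the space is doubling if there is a constant $C$ such that

```latex
\mu\bigl(B(x,2r)\bigr) \;\le\; C\,\mu\bigl(B(x,r)\bigr)
\qquad \text{for every center } x \text{ and every radius } r > 0,
```

where $B(x,r)$ denotes the ball of radius $r$ about $x$ in the pseudo-metric. The construction in the abstract controls how the ratio $\mu(B(x,2r))/\mu(B(x,r))$ behaves in terms of the upper and lower dimensions.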
Abstract:
OBJECTIVES: In the absence of a gold standard, the assessment of physical activity in children remains difficult. The aims were to record physical activity with a pedometer and to examine to what extent it correlates with VO2max. METHODS: Survey on physical activity and fitness; 233 Swiss adolescents aged 11 to 15 carried a pedometer (Pedoboy) for seven consecutive days. VO2max was estimated through an endurance shuttle run test. RESULTS: The physical activity recorded by the pedometer did not vary from one day to another (p > 0.05). Physical activity was higher among boys than among girls (p < 0.001) and higher among younger adolescents (6th versus 8th grade; p < 0.001). The correlation between physical activity and estimated VO2max was 0.30 (p < 0.01). CONCLUSIONS: The use of a pedometer to assess physical activity over one entire week is feasible among adolescents. The record provided by the pedometer gives an objective measure of usual physical activity and, as such, is relatively well correlated with aerobic capacity.
Abstract:
We examine the patterns formed by injecting nitrogen gas into the center of a horizontal, radial Hele-Shaw cell filled with paraffin oil. We use smooth plates and etched plates with lattices having different amounts of defects (0 to 10 %). In all cases, a quantitative measure of the pattern ramification shows a regular trend with injection rate and cell gap, such that the dimensionless perimeter scales with the dimensionless time. By adding defects to the lattice, we observe increased branching in the pattern morphologies. However, even in this case, the scaling behavior persists. Only the prefactor of the scaling function shows a dependence on the defect density. For different lattice defect densities, we examine the nature of the different morphology phases.