856 results for Global variance-based
Abstract:
Background: The responsiveness of oral health-related quality of life (OHRQoL) instruments has become relevant, given the increasing tendency to use OHRQoL measures as outcomes in clinical trials and evaluation studies. The purpose of this study was to assess the responsiveness of the Brazilian Scale of Oral Health Outcomes for 5-year-old children (SOHO-5) to dental treatment. Methods: One hundred and fifty-four children and their parents completed the child self-report and parental report versions of the SOHO-5 prior to treatment and 7 to 14 days after the completion of treatment. The post-treatment questionnaire also included a global transition judgment that assessed subjects' perceptions of change in their oral health following treatment. Change scores were calculated by subtracting post-treatment SOHO-5 scores from pre-treatment scores. Longitudinal construct validity was assessed by using one-way analysis of variance to examine the association between change scores and the global transition judgments. Measures of responsiveness included standardized effect sizes (ES) and the standardized response mean (SRM). Results: The improvement of children's oral health after treatment is reflected in mean pre- and post-treatment SOHO-5 scores, which declined from 2.67 to 0.61 (p < 0.001) for the child self-reports and from 4.04 to 0.71 (p < 0.001) for the parental reports. Mean change scores showed a gradient in the expected direction across categories of the global transition judgment, and there were significant differences in the pre- and post-treatment scores of those who reported improving a little (p < 0.05) and those who reported improving a lot (p < 0.001). For both versions, the ES and SRM based on mean change scores, for total scores and for categories of the global transition judgment, were moderate to large. Conclusions: The Brazilian SOHO-5 is responsive to change and can be used as an outcome indicator in future clinical trials. Both the parental and the child versions presented satisfactory results.
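Both responsiveness statistics named above have standard definitions: the effect size divides the mean change score by the standard deviation of the pre-treatment scores, and the standardized response mean divides it by the standard deviation of the change scores. A minimal Python sketch with hypothetical scores (standard formulas, not the authors' code):

```python
import numpy as np

def responsiveness(pre, post):
    """Standardized effect size (ES) and standardized response mean (SRM)
    for paired pre-/post-treatment scores (lower score = better OHRQoL)."""
    pre, post = np.asarray(pre, float), np.asarray(post, float)
    change = pre - post                       # positive change = improvement
    es = change.mean() / pre.std(ddof=1)      # ES: mean change / SD of baseline scores
    srm = change.mean() / change.std(ddof=1)  # SRM: mean change / SD of change scores
    return es, srm

# Hypothetical scores, for illustration only
pre_scores = [4, 3, 5, 2, 6, 3]
post_scores = [1, 0, 2, 1, 1, 0]
print(responsiveness(pre_scores, post_scores))
```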
Abstract:
Purpose: To describe a research-based global curriculum in speech-language pathology and audiology that is part of a funded cross-linguistic consortium among 2 U.S. and 2 Brazilian universities. Method: The need for a global curriculum in speech-language pathology and audiology is outlined, and different funding sources are identified to support development of a global curriculum. The U.S. Department of Education's Fund for the Improvement of Post-Secondary Education (FIPSE), in conjunction with the Brazilian Ministry of Education (Fundacao Coordenacao de Aperfeicoamento de Pessoal de Nivel Superior; CAPES), funded the establishment of a shared research curriculum project, "Consortium for Promoting Cross-Linguistic Understanding of Communication Disabilities in Children," for East Tennessee State University, the University of Northern Iowa, and 2 Brazilian universities (Universidade Federal de Santa Maria and Universidade de São Paulo-Baurú). Results: The goals and objectives of the research-based global curriculum are summarized, and a description of an Internet-based course, "Different Languages, One World," is provided. Conclusion: Partnerships such as the FIPSE–CAPES consortium provide a foundation for training future generations of globally- and research-prepared practitioners in speech-language pathology and audiology.
Abstract:
Classical Pavlovian fear conditioning to painful stimuli has provided the generally accepted view of a core system, centered on the central amygdala, that organizes fear responses. Ethologically based models using other sources of threat likely to be encountered in a natural environment, such as predators or aggressive dominant conspecifics, have challenged this concept of a unitary core circuit for fear processing. We discuss here what the ethologically based models have told us about the neural systems organizing fear responses. We explore the concept that parallel paths process different classes of threats, and that these different paths influence distinct regions of the periaqueductal gray, a critical element for the organization of all kinds of fear responses. Despite this parallel processing of different kinds of threats, we also discuss an interesting emerging view that common cortical-hippocampal-amygdalar paths seem to be engaged in fear conditioning to painful stimuli, to predators and, perhaps, to aggressive dominant conspecifics as well. Overall, the aim of this review is to bring into focus a more global and comprehensive view of the systems organizing fear responses.
Abstract:
Acute kidney injury (AKI) is classically described as a rapid loss of kidney function. AKI affects more than 15% of all hospital admissions and is associated with elevated mortality rates. Although many advances have occurred, intermittent or continuous renal replacement therapies are still considered the best options for reversing mild and severe AKI syndrome. For this reason, it is essential that innovative and effective therapies, without side effects and complications, be developed to treat AKI and end-stage renal disease. Mesenchymal stem cell (MSC)-based therapies have numerous advantages in helping to repair inflamed and damaged tissues and are being considered as a new alternative for treating kidney injuries. Numerous experimental models have shown that MSCs can act via differentiation-independent mechanisms to help renal recovery. Essentially, MSCs can secrete a pool of cytokines, growth factors and chemokines, express enzymes, interact via cell-to-cell contacts and release bioagents such as microvesicles to orchestrate renal protection. In this review, we propose seven distinct properties of MSCs that explain how renoprotection may be conferred: 1) anti-inflammatory; 2) pro-angiogenic; 3) stimulation of endogenous progenitor cells; 4) anti-apoptotic; 5) anti-fibrotic; 6) anti-oxidant; and 7) promotion of cellular reprogramming. In this context, these mechanisms, either individually or synergistically, could induce renal protection and functional recovery. This review summarises the most important effects and benefits associated with MSC-based therapies in experimental renal disease models and attempts to clarify the mechanisms behind the MSC-related renoprotection. MSCs may prove to be an effective, innovative and affordable treatment for moderate and severe AKI. However, more studies need to be performed to provide a more comprehensive global understanding of MSC-related therapies and to ensure their safety for future clinical applications.
Abstract:
BACKGROUND: In the alpha subclass of proteobacteria, iron homeostasis is controlled by diverse iron-responsive regulators. Caulobacter crescentus, an important freshwater α-proteobacterium, uses the ferric uptake repressor (Fur) for this purpose. However, the impact of iron availability on the C. crescentus transcriptome and an overall perspective of the regulatory networks involved remain unknown. RESULTS: In this work we report the identification of iron-responsive and Fur-regulated genes in C. crescentus using microarray-based global transcriptional analyses. We identified 42 genes that were strongly upregulated both by mutation of fur and by iron limitation. Among them are genes involved in iron uptake (four TonB-dependent receptor gene clusters, and feoAB), riboflavin biosynthesis, and genes encoding hypothetical proteins. Most of these genes are associated with predicted Fur binding sites, implicating them as direct targets of Fur-mediated repression. These data were validated by β-galactosidase and EMSA assays for two operons encoding putative transporters. The role of Fur as a positive regulator is also evident, given that 27 genes were downregulated both by mutation of fur and under low-iron conditions. As expected, this group includes many genes involved in energy metabolism, mostly iron-using enzymes. Surprisingly, this group also includes TonB-dependent receptor genes and the genes fixK, fixT and ftrB, which encode an oxygen signaling network required for growth during hypoxia. Bioinformatics analyses suggest that positive regulation by Fur is mainly indirect. In addition to the Fur modulon, iron limitation altered the expression of 113 more genes, including induction of genes involved in Fe-S cluster assembly, oxidative stress and the heat shock response, as well as repression of genes implicated in amino acid metabolism, chemotaxis and motility. CONCLUSIONS: Using a global transcriptional approach, we determined the C. crescentus iron stimulon. Many, but not all, of the iron-responsive genes were directly or indirectly controlled by Fur. The iron limitation stimulon overlaps with other regulatory systems, such as the RpoH and FixK regulons. Altogether, our results show that adaptation of C. crescentus to iron limitation not only involves increasing the transcription of iron-acquisition systems and decreasing the production of iron-using proteins, but also includes novel genes and regulatory mechanisms.
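The core selection step described above, genes responding both to the fur mutation and to iron limitation, amounts to intersecting two differential-expression lists. A minimal sketch with hypothetical gene names, fold changes and thresholds (not the study's actual pipeline):

```python
import pandas as pd

# Hypothetical microarray results: log2 fold changes vs. wild type / iron-replete conditions
fur_mut = pd.DataFrame({"gene": ["feoA", "feoB", "ribB", "fixK"],
                        "log2fc": [2.5, 2.1, 1.8, -1.6]})
low_iron = pd.DataFrame({"gene": ["feoA", "feoB", "ribB", "fixK"],
                         "log2fc": [3.0, 2.4, 1.5, -2.0]})

merged = fur_mut.merge(low_iron, on="gene", suffixes=("_fur", "_iron"))
# Candidate Fur-repressed genes: up in both conditions above a fold-change threshold
up = merged[(merged.log2fc_fur >= 1) & (merged.log2fc_iron >= 1)]
# Candidate Fur-activated genes: down in both conditions
down = merged[(merged.log2fc_fur <= -1) & (merged.log2fc_iron <= -1)]
print(up.gene.tolist(), down.gene.tolist())
```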
Abstract:
Hermite interpolation is increasingly proving to be a powerful numerical solution tool, as applied to different kinds of second-order boundary value problems. In this work we present two Hermite finite element methods to solve viscous incompressible flow problems, in both two- and three-dimensional space. In the two-dimensional case we use the Zienkiewicz triangle to represent the velocity field, and in the three-dimensional case an extension of this element to tetrahedra, still called a Zienkiewicz element. Taking as a model the Stokes system, the pressure is approximated with continuous functions, either piecewise linear or piecewise quadratic, according to the version of the Zienkiewicz element in use, that is, with either incomplete or complete cubics. The methods employ either the standard Galerkin formulation or the Petrov–Galerkin formulation first proposed in Hughes et al. (1986) [18], based on the addition of a balance-of-force term. A priori error analyses point to optimal convergence rates for the PG approach, and for the Galerkin formulation too, at least in some particular cases. From the point of view of both accuracy and the global number of degrees of freedom, the new methods are shown to have a favorable cost-benefit ratio compared to velocity Lagrange finite elements of the same order, especially if the Galerkin approach is employed.
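For reference, the Stokes system used as the model problem can be written in its standard strong form, where u is the velocity (approximated here by Zienkiewicz elements), p the pressure (continuous piecewise linear or quadratic), ν the viscosity and f the body force; the homogeneous Dirichlet boundary condition is assumed only for illustration:

```latex
\begin{aligned}
-\nu\,\Delta \mathbf{u} + \nabla p &= \mathbf{f} && \text{in } \Omega,\\
\nabla \cdot \mathbf{u} &= 0 && \text{in } \Omega,\\
\mathbf{u} &= \mathbf{0} && \text{on } \partial\Omega .
\end{aligned}
```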
Abstract:
Modern food production is a complex, globalized system in which what we eat and how it is produced are increasingly disconnected. This thesis examines some of the ways in which global trade has changed the mix of inputs to food and feed, and how this affects food security and our perceptions of sustainability. One useful indicator of the ecological impact of trade in food and feed products is the Appropriated Ecosystem Areas (ArEAs), which estimates the terrestrial and aquatic areas needed to produce all the inputs to particular products. The method is introduced in Paper I and used to calculate and track changes in imported subsidies to Swedish agriculture over the period 1962-1994. In 1994, Swedish consumers needed agricultural areas outside their national borders to satisfy more than a third of their food consumption needs. The method is then applied to Swedish meat production in Paper II to show that the term “Made in Sweden” is often a misnomer. In 1999, almost 80% of manufactured feed for Swedish pigs, cattle and chickens was dependent on imported inputs, mainly from Europe, Southeast Asia and South America. Paper III examines ecosystem subsidies to intensive aquaculture in two nations: shrimp production in Thailand and salmon production in Norway. In both countries, aquaculture was shown to rely increasingly on imported subsidies. The rapid expansion of aquaculture turned these countries from fishmeal net exporters to fishmeal net importers, increasingly using inputs from the Southeastern Pacific Ocean. As the examined agricultural and aquacultural production systems became globalized, levels of dependence on other nations’ ecosystems, the number of external supply sources, and the distance to these sources steadily increased. Dependence on other nations is not problematic, as long as we are able to acknowledge these links and sustainably manage resources both at home and abroad. However, ecosystem subsidies are seldom recognized or made explicit in national policy or economic accounts. Economic systems are generally not designed to receive feedbacks when the status of remote ecosystems changes, much less to respond in an ecologically sensitive manner. Papers IV and V discuss the problem of “masking” of the true environmental costs of production for trade. One of our conclusions is that, while the ArEAs approach is a useful tool for illuminating environmentally-based subsidies in the policy arena, it does not reflect all of the costs. Current agricultural and aquacultural production methods have generated substantial increases in production levels, but if policy continues to support the focus on yield and production increases alone, taking the work of ecosystems for granted, vulnerability can result. Thus, a challenge is to develop a set of complementary tools that can be used in economic accounting at national and international scales that address ecosystem support and performance. We conclude that future resilience in food production systems will require more explicit links between consumers and the work of supporting ecosystems, locally and in other regions of the world, and that food security planning will require active management of the capacity of all involved ecosystems to sustain food production.
Abstract:
This thesis is based on five papers addressing variance reduction in different ways. The papers have in common that they all present new numerical methods. Paper I investigates quantitative structure-retention relationships from an image processing perspective, using an artificial neural network to preprocess three-dimensional structural descriptions of the studied steroid molecules. Paper II presents a new method for computing free energies. Free energy is the quantity that determines chemical equilibria and partition coefficients. The proposed method may be used for estimating, e.g., chromatographic retention without performing experiments. Two papers (III and IV) deal with correcting deviations from bilinearity by so-called peak alignment. Bilinearity is a theoretical assumption about the distribution of instrumental data that is often violated by measured data. Deviations from bilinearity lead to increased variance, both in the data and in inferences from the data, unless invariance to the deviations is built into the model, e.g., by use of the method proposed in Paper III and extended in Paper IV. Paper V addresses a generic problem in classification, namely, how to measure the goodness of different data representations so that the best classifier may be constructed. Variance reduction is one of the pillars on which analytical chemistry rests. This thesis considers two aspects of variance reduction: before and after experiments are performed. Before experimenting, theoretical predictions of experimental outcomes may be used to decide which experiments to perform, and how to perform them (Papers I and II). After experiments are performed, the variance of inferences from the measured data is affected by the method of data analysis (Papers III-V).
Abstract:
Marine N2-fixing microorganisms, termed diazotrophs, are a key functional group in marine pelagic ecosystems. The biological fixation of dinitrogen (N2) into bioavailable nitrogen provides an important new source of nitrogen for pelagic marine ecosystems and influences primary productivity and organic matter export to the deep ocean. As one of a series of efforts to collect biomass and rates specific to different phytoplankton functional groups, we have constructed a database on diazotrophic organisms in the global pelagic upper ocean by compiling about 12 000 direct field measurements of cyanobacterial diazotroph abundances (based on microscopic cell counts or qPCR assays targeting the nifH genes) and N2 fixation rates. Biomass conversion factors are estimated based on cell sizes to convert abundance data to diazotrophic biomass. The database is limited spatially, lacking large regions of the ocean, especially the Indian Ocean. The data are approximately log-normally distributed, and large variances exist in most sub-databases, with non-zero values differing by 5 to 8 orders of magnitude. A lower mean N2 fixation rate was found in the North Atlantic Ocean than in the Pacific Ocean. Reporting the geometric mean and the range of one geometric standard error below and above the geometric mean, the pelagic N2 fixation rate in the global ocean is estimated to be 62 (53–73) Tg N yr−1, and the pelagic diazotrophic biomass in the global ocean is estimated to be 4.7 (2.3–9.6) Tg C from cell counts and 89 (40–200) Tg C from nifH-based abundances. Uncertainties related to biomass conversion factors can change the estimate of the geometric mean pelagic diazotrophic biomass in the global ocean by about ±70 %. This evolving database can be used to study spatial and temporal distributions and variations of marine N2 fixation, to validate geochemical estimates, and to parameterize and validate biogeochemical models. The database is stored in PANGAEA (http://doi.pangaea.de/10.1594/PANGAEA.774851).
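The reported central estimates follow the usual summary for approximately log-normal data: take the mean and standard error of the log-transformed non-zero values and back-transform. A minimal sketch with made-up rates (not values from the database):

```python
import numpy as np

def geometric_summary(values):
    """Geometric mean and the range one geometric standard error below/above it."""
    logs = np.log(np.asarray(values, float))             # positive, non-zero values only
    gmean = np.exp(logs.mean())
    gse = np.exp(logs.std(ddof=1) / np.sqrt(len(logs)))  # geometric standard error
    return gmean, gmean / gse, gmean * gse

# Hypothetical N2 fixation rates (e.g., umol N m^-2 d^-1), for illustration only
rates = [2.0, 15.0, 0.5, 120.0, 8.0, 33.0]
print(geometric_summary(rates))
```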
Abstract:
A prevalent claim is that we are in a knowledge economy. When we talk about the knowledge economy, we generally mean the concept of a "knowledge-based economy", indicating the use of knowledge and technologies to produce economic benefits. Hence knowledge is both a tool and a raw material (people's skills) for producing some kind of product or service. In this kind of environment, economic organization is undergoing several changes. For example, authority relations are less important, legal and ownership-based definitions of the boundaries of the firm are becoming irrelevant, and there are only few constraints on the set of coordination mechanisms. Hence what characterises a knowledge economy is the growing importance of human capital in productive processes (Foss, 2005) and the increasing knowledge intensity of jobs (Hodgson, 1999). Economic processes are also highly intertwined with social processes: they are likely to be informal and reciprocal rather than formal and negotiated. Another important point is the problem of the division of labor: as economic activity becomes mainly intellectual and requires the integration of specific and idiosyncratic skills, the task of dividing the job and assigning it to the most appropriate individuals becomes arduous, a "supervisory problem" (Hodgson, 1999) emerges, and traditional hierarchical control may prove increasingly ineffective. Not only does the specificity of know-how make it awkward to monitor the execution of tasks; more importantly, top-down integration of skills may be difficult because 'the nominal supervisors will not know the best way of doing the job – or even the precise purpose of the specialist job itself – and the worker will know better' (Hodgson, 1999). We therefore expect that the organization of the economic activity of specialists should be, at least partially, self-organized. The aim of this thesis is to bridge studies from computer science, and in particular from Peer-to-Peer (P2P) Networks, to organization theories. We think that the P2P paradigm fits well with organization problems related to all those situations in which a central authority is not possible. We believe that P2P networks show a number of characteristics similar to firms working in a knowledge-based economy, and hence that the methodology used for studying P2P networks can be applied to organization studies. There are three main characteristics we think P2P networks have in common with firms involved in the knowledge economy: - Decentralization: in a pure P2P system every peer is an equal participant; there is no central authority governing the actions of the single peers; - Cost of ownership: P2P computing implies shared ownership, reducing the cost of owning the systems and the content, and the cost of maintaining them; - Self-Organization: it refers to the process in a system leading to the emergence of global order within the system without the presence of another system dictating this order. These characteristics are also present in the kind of firm that we try to address, and that is why we have shifted the techniques we adopted for studies in computer science (Marcozzi et al., 2005; Hales et al., 2007 [39]) to management science.
Abstract:
This doctoral work gains deeper insight into the dynamics of knowledge flows within and across clusters, unfolding their features, directions and strategic implications. Alliances, networks and personnel mobility are acknowledged as the three main channels of inter-firm knowledge flows, thus offering three heterogeneous measures to analyze the phenomenon. The interplay between the three channels and the richness of available research methods have allowed for the elaboration of three different papers and perspectives. The common empirical setting is the IT cluster in Bangalore, chosen for its distinguished features as a high-tech cluster and for its steady double-digit yearly growth around the service-based business model. The first paper deploys both a firm-level and a tie-level analysis, exploring the cases of 4 domestic companies and of 2 MNCs active in the cluster, according to a cluster-based perspective. The distinction between business-domain knowledge and technical knowledge emerges from the qualitative evidence and is further confirmed by quantitative analyses at the tie level. At the firm level, the degree of specialization seems to influence the kind of knowledge shared, while at the tie level both the frequency of interaction and the governance mode prove to determine differences in the distribution of knowledge flows. The second paper zooms out and considers the inter-firm networks; focusing particularly on the role of the cluster boundary, internal and external networks are analyzed in their size, long-term orientation and exploration degree. The research method is purely qualitative and allows for the observation of the evolving strategic role of the internal network: from exploitation-based to exploration-based. Moreover, a causal pattern is emphasized, linking the evolution and features of the external network to the evolution and features of the internal network. The final paper addresses the softer and more micro-level side of knowledge flows: personnel mobility. A social capital perspective is developed here, which considers both employees' acquisition and employees' loss as building inter-firm ties, thus enhancing the company's overall social capital. Negative binomial regression analyses at the dyad level test the significant impact of cluster affiliation (cluster firms vs non-cluster firms), industry affiliation (IT firms vs non-IT firms) and foreign affiliation (MNCs vs domestic firms) in shaping the uneven distribution of personnel mobility, and thus of knowledge flows, among companies.
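The dyad-level count model mentioned at the end can be sketched as a standard negative binomial GLM; the variable names, coding and simulated data below are hypothetical placeholders, not the thesis dataset:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical dyads: counts of employees moving between pairs of firms
rng = np.random.default_rng(0)
n = 200
dyads = pd.DataFrame({
    "cluster": rng.integers(0, 2, n),  # both firms located in the cluster?
    "it": rng.integers(0, 2, n),       # both firms in the IT industry?
    "mnc": rng.integers(0, 2, n),      # at least one MNC in the dyad?
})
dyads["moves"] = rng.poisson(lam=np.exp(0.3 + 0.8 * dyads.cluster + 0.5 * dyads.it))

X = sm.add_constant(dyads[["cluster", "it", "mnc"]])
model = sm.GLM(dyads["moves"], X, family=sm.families.NegativeBinomial()).fit()
print(model.summary())
```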
Abstract:
The intensity of regional specialization in specific activities, and conversely, the level of industrial concentration in specific locations, has been used as complementary evidence for the existence and significance of externalities. Additionally, economists have mainly focused the debate on disentangling the sources of specialization and concentration processes according to three vectors: natural advantages, internal scale economies, and external scale economies. The arbitrariness of spatial partitions plays a key role in capturing these effects, while the selection of the partition would have to reflect the actual characteristics of the economy. Thus, the identification of spatial boundaries to measure specialization becomes critical, since the model will most likely have to be adapted to different scales of distance and be influenced by different types of externalities or agglomeration economies, which are based on mechanisms of interaction with particular requirements of spatial proximity. This work analyzes the spatial aspect of economic specialization, using the manufacturing industry as a case. The main objective is to propose, for discrete and continuous space: i) a measure of global specialization; ii) a local disaggregation of the global measure; and iii) a spatial clustering method for the identification of specialized agglomerations.
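To fix ideas, one common discrete-space measure of regional specialization is the Krugman-style index computed from employment shares; the sketch below uses that standard index with made-up data, and is not the specific global or local measure proposed in this work:

```python
import numpy as np

# Hypothetical employment counts: rows = regions, columns = manufacturing sectors
emp = np.array([[120.,  30.,  50.],
                [ 40., 200.,  60.],
                [ 80.,  90., 300.]])

region_shares = emp / emp.sum(axis=1, keepdims=True)  # sector mix within each region
national_shares = emp.sum(axis=0) / emp.sum()         # sector mix of the whole economy

# Krugman specialization index per region:
# 0 = identical to the national mix, 2 = completely specialized
ksi = np.abs(region_shares - national_shares).sum(axis=1)
print(ksi)
```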
Abstract:
Ocean acidification is an effect of the rise in atmospheric CO2, which causes a reduction in the pH of the ocean, generates a number of changes in seawater chemistry, and consequently potentially impacts marine life. The effect of ocean acidification on metabolic processes (such as net community production and community respiration) and on particulate organic carbon (POC) concentrations was investigated in summer 2012 at Cap de la Revellata in Corsica (Calvi, France). Coastal surface water was enclosed in 9 mesocosms and subjected to 6 pCO2 levels (3 replicated controls and 6 perturbations) for approximately one month. No trend was found in response to increasing pCO2 in any of the biological and particulate analyses. Community respiration was relatively stable throughout the experiment in all mesocosms, and net community production was close to zero most of the time. Similarly, POC concentrations were not affected by acidification during the whole experimental period. Like the global ocean, the Mediterranean Sea is oligotrophic in nature. Based on the present results, it seems likely that seawater acidification will not have significant effects on photosynthetic rates, microbial metabolism and carbon transport.
Abstract:
The objective of this study is the genetic characterization of the ancient population of the Great White shark, Carcharodon carcharias (L. 1758), present in the Mediterranean Sea. Using historical material, for the most part buccal arches but also whole, stuffed specimens from various national museums, research institutes and private collections, a dataset of 18 specimens from the Mediterranean Sea was assembled in order to increase the information available on this species in the Mediterranean. The importance of the Mediterranean provenance derives from the fact that a genetic characterization of this species' Mediterranean population does not yet exist, which leaves gaps in our knowledge of the species in this area. The genetic characterization of the individuals was carried out by extracting ancient DNA and analyzing variation in mitochondrial DNA sequence markers. This approach has allowed the genetic comparison between ancient populations of the Mediterranean and contemporary populations of the same geographical area. In addition, the genetic characterization of the Mediterranean white shark population has allowed a genetic comparison with populations from global "hot spots", using sequences published in online databases (NCBI, GenBank). By analyzing the variability of the dataset in both space and time, I assessed the evolutionary relationships of the Mediterranean population of Great Whites with the global populations (Australia/New Zealand, South Africa, Pacific USA, West Atlantic), and the temporal trend of the Mediterranean population's variability. This method, based on the sequencing of two portions of mitochondrial DNA gene markers, showed that the population of Great White Sharks in the Mediterranean is genetically more similar to the populations of the Australian and American Pacific Ocean than to the population of South Africa, and also that the South African population is unusually distant from all other clusters. Interestingly, these results are inconsistent with the results from tagging of this species. In addition, there is evidence of differences between the ancient population of the Mediterranean and the modern one. This differentiation between the ancient and modern white shark populations may be the result of events impacting this species over the last two centuries.
Abstract:
In the last couple of decades we have witnessed a reappraisal of spatial design-based techniques. Usually, the spatial information regarding the location of the individuals of a population has been used to develop efficient sampling designs. This thesis aims at offering a new technique for inference on both individual values and global population values, able to employ, at the estimation stage, the spatial information available before sampling, by rewriting a deterministic interpolator within a design-based framework. The resulting point estimator of the individual values is treated both in the case of finite spatial populations and in that of continuous spatial domains, while the theory on the estimator of the global population value covers the finite population case only. A fairly broad simulation study compares the results of the point estimator with the simple random sampling without replacement estimator in predictive form and with kriging, the benchmark technique for inference on spatial data. The Monte Carlo experiment is carried out on populations generated according to different superpopulation methods in order to control different aspects of the spatial structure. The simulation outcomes point out that the proposed point estimator behaves almost the same as the kriging predictor regardless of the parameters adopted for generating the populations, especially for low sampling fractions. Moreover, the use of the spatial information substantially improves design-based spatial inference on individual values.
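As an illustration of the kind of estimator being compared, the sketch below predicts the value at an unsampled location from a without-replacement sample using inverse-distance weighting; the choice of interpolator and the simulated data are assumptions for illustration only, not the estimator derived in the thesis:

```python
import numpy as np

def idw_predict(sample_xy, sample_z, target_xy, power=2.0):
    """Inverse-distance-weighted prediction of the value at target_xy
    from sampled locations sample_xy with observed values sample_z."""
    sample_z = np.asarray(sample_z, float)
    d = np.linalg.norm(np.asarray(sample_xy, float) - np.asarray(target_xy, float), axis=1)
    if np.any(d == 0):                  # target coincides with a sampled unit
        return sample_z[np.argmin(d)]
    w = 1.0 / d**power
    return np.sum(w * sample_z) / np.sum(w)

# Hypothetical finite spatial population sampled without replacement
rng = np.random.default_rng(1)
xy = rng.uniform(0, 10, size=(30, 2))           # sampled coordinates
z = 5 + 0.4 * xy[:, 0] + rng.normal(0, 1, 30)   # observed values with a spatial trend
print(idw_predict(xy, z, target_xy=(5.0, 5.0)))
```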