933 results for Incommensurability of values
Abstract:
The number of record-breaking events expected to occur in a strictly stationary time-series depends only on the number of values in the time-series, regardless of distribution. This holds whether the events are record-breaking highs or lows and whether we count from past to present or present to past. However, these symmetries are broken in distinct ways by trends in the mean and variance. We define indices that capture this information and use them to detect weak trends from multiple time-series. Here, we use these methods to answer the following questions: (1) Is there a variability trend among globally distributed surface temperature time-series? We find significantly decreasing variability over the past century for the Global Historical Climatology Network (GHCN). This corresponds to about a 10% change in the standard deviation of inter-annual monthly mean temperature distributions. (2) How are record-breaking high and low surface temperatures in the United States affected by the time period considered? We investigate the United States Historical Climatology Network (USHCN) and find that the ratio of record-breaking highs to lows in 2006 increases as the time-series extend further into the past. When we consider the ratio as it evolves with respect to a fixed start year, we find it is strongly correlated with the ensemble mean. We also compare the ratios for USHCN and GHCN (minus USHCN stations). We find the ratios grow monotonically in the GHCN data set, but not in the USHCN data set. (3) Do we detect either mean or variance trends in annual precipitation within the United States? We find that the total annual and monthly precipitation in the United States (USHCN) has increased over the past century. Evidence for a trend in variance is inconclusive.
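A minimal sketch (not taken from the paper) of the distribution-free record-counting property described above: for an exchangeable (e.g., i.i.d.) series of n distinct values, the expected number of record highs is the harmonic number H_n, independent of the underlying distribution, and the same holds for record lows or for counting in reverse time. The function names and the simulation setup below are illustrative assumptions.

```python
import numpy as np

def count_records(x, kind="high"):
    """Count record-breaking events, scanning the series from first to last value."""
    x = np.asarray(x, dtype=float)
    if kind == "low":
        x = -x  # a record low of x is a record high of -x
    best, records = -np.inf, 0
    for v in x:
        if v > best:
            best, records = v, records + 1
    return records

def expected_records(n):
    """Expected number of records in n i.i.d. draws: the harmonic number H_n."""
    return float(np.sum(1.0 / np.arange(1, n + 1)))

rng = np.random.default_rng(0)
n, trials = 100, 5000
mean_count = np.mean([count_records(rng.normal(size=n)) for _ in range(trials)])
print(mean_count, expected_records(n))  # both close to ~5.19 for n = 100
```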
Abstract:
BACKGROUND: Sedation protocols, including the use of sedation scales and regular sedation stops, help to reduce the length of mechanical ventilation and intensive care unit stay. Because clinical assessment of depth of sedation is labor-intensive, performed only intermittently, and interferes with sedation and sleep, processed electrophysiological signals from the brain have gained interest as surrogates. We hypothesized that auditory event-related potentials (ERPs), Bispectral Index (BIS), and Entropy can discriminate among clinically relevant sedation levels. METHODS: We studied 10 patients after elective thoracic or abdominal surgery with general anesthesia. Electroencephalogram, BIS, state entropy (SE), response entropy (RE), and ERPs were recorded immediately after surgery in the intensive care unit at Richmond Agitation-Sedation Scale (RASS) scores of -5 (very deep sedation), -4 (deep sedation), -3 to -1 (moderate sedation), and 0 (awake) during decreasing target-controlled sedation with propofol and remifentanil. Reference measurements for baseline levels were performed before or several days after the operation. RESULTS: At baseline, RASS -5, RASS -4, RASS -3 to -1, and RASS 0, BIS was 94 [4] (median [IQR]), 47 [15], 68 [9], 75 [10], and 88 [6]; SE was 87 [3], 46 [10], 60 [22], 74 [21], and 87 [5]; and RE was 97 [4], 48 [9], 71 [25], 81 [18], and 96 [3], respectively (all P < 0.05, Friedman test). Both BIS and Entropy showed high variability. When ERP N100 amplitudes were considered alone, ERPs did not differ significantly among sedation levels. Nevertheless, discriminant ERP analysis including two parameters of principal component analysis revealed a prediction probability (PK) value of 0.89 for differentiating deep sedation, moderate sedation, and the awake state. The corresponding PK for RE, SE, and BIS was 0.88, 0.89, and 0.85, respectively. CONCLUSIONS: Neither ERPs, BIS, nor Entropy can replace clinical sedation assessment with standard scoring systems. Discrimination among very deep, deep to moderate, and no sedation after general anesthesia can be provided by ERPs and processed electroencephalograms, with similar PK values. The high inter- and intraindividual variability of Entropy and BIS precludes defining a target range of values to predict the sedation level in critically ill patients using these parameters. The variability of ERPs is unknown.
Abstract:
OBJECTIVES: CD4 cell count and plasma viral load are well known predictors of AIDS and mortality in HIV-1-infected patients treated with combination antiretroviral therapy (cART). This study investigated, in patients treated for at least 3 years, the respective prognostic importance of values measured at cART initiation, and 6 and 36 months later, for AIDS and death. METHODS: Patients from 15 HIV cohorts included in the ART Cohort Collaboration, aged at least 16 years, antiretroviral-naive when they started cART and followed for at least 36 months after start of cART were eligible. RESULTS: Among 14 208 patients, the median CD4 cell counts at 0, 6 and 36 months were 210, 320 and 450 cells/microl, respectively, and 78% of patients achieved viral load less than 500 copies/ml at 6 months. In models adjusted for characteristics at cART initiation and for values at all time points, values at 36 months were the strongest predictors of subsequent rates of AIDS and death. Although CD4 cell count and viral load at cART initiation were no longer prognostic of AIDS or of death after 36 months, viral load at 6 months and change in CD4 cell count from 6 to 36 months were prognostic for rates of AIDS from 36 months. CONCLUSIONS: Although current values of CD4 cell count and HIV-1 RNA are the most important prognostic factors for subsequent AIDS and death rates in HIV-1-infected patients treated with cART, changes in CD4 cell count from 6 to 36 months and the value of 6-month HIV-1 RNA are also prognostic for AIDS.
Abstract:
Software metrics offer us the promise of distilling useful information from vast amounts of software in order to track development progress, to gain insights into the nature of the software, and to identify potential problems. Unfortunately, however, many software metrics exhibit highly skewed, non-Gaussian distributions. As a consequence, usual ways of interpreting these metrics --- for example, in terms of "average" values --- can be highly misleading. Many metrics, it turns out, are distributed like wealth --- with high concentrations of values in selected locations. We propose to analyze software metrics using the Gini coefficient, a higher-order statistic widely used in economics to study the distribution of wealth. Our approach allows us not only to observe changes in software systems efficiently, but also to assess project risks and monitor the development process itself. We apply the Gini coefficient to numerous metrics over a range of software projects, and we show that many metrics not only display remarkably high Gini values, but that these values are remarkably consistent as a project evolves over time.
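As a rough illustration of the approach (a sketch under our own assumptions, not the authors' implementation), the Gini coefficient of a metric can be computed directly from the sorted per-entity values; the metric values below are invented for illustration.

```python
import numpy as np

def gini(values):
    """Gini coefficient of non-negative metric values: 0 = perfectly even,
    values approaching 1 = concentrated in a few entities."""
    v = np.sort(np.asarray(values, dtype=float))
    n = v.size
    if n == 0 or v.sum() == 0:
        return 0.0
    # Closed form for sorted values: G = (2 * sum_i i*v_i) / (n * sum_i v_i) - (n + 1) / n
    index = np.arange(1, n + 1)
    return 2.0 * np.sum(index * v) / (n * v.sum()) - (n + 1.0) / n

# Hypothetical example: lines of code per class in one release of a project
loc_per_class = [12, 30, 25, 800, 40, 15, 2200, 60]
print(round(gini(loc_per_class), 3))  # ~0.76, i.e. most of the code sits in a few classes
```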
Abstract:
Research and professional practice share the aim of restructuring preconceived notions of reality; both seek to gain an understanding of social reality. Social workers use their professional competence to grasp the reality of their clients, while researchers seek to open up the secrets of their research material. Development and research are now so intertwined and so inherent in almost all professional practices that making distinctions between practising, developing and researching has become difficult and in many respects irrelevant. Moving towards research-based practices is possible and readily done within the framework of the qualitative research approach (Dominelli 2005, 235; Humphries 2005, 280). Social work can be understood as acts and speech acts crisscrossing between social workers and clients. When trying to catch the verbal and non-verbal hints of each other's behaviour, the actors have to make many interpretations in a more or less uncertain mental landscape. Our point of departure is the idea that the study of social work practices requires tools which effectively reveal the internal complexity of social work (see, for example, Adams & Dominelli & Payne 2005, 294–295). The boom in qualitative research methodologies in recent decades is associated with a much more profound rupture in the humanities, the so-called linguistic turn (Rorty 1967). The idea that language does not transparently mediate our perceptions and thoughts about reality but, on the contrary, constitutes it was new and even confusing to many social scientists. Nowadays we are used to reading research reports that apply different branches of discourse analysis or narratological or semiotic approaches. Although the differences between those orientations are subtle, they share the idea of the primacy of language. Despite the lively research activity of today's social work and the research-minded atmosphere of social work practice, semiotics has rarely been applied in social work research. Yet social work as a communicative practice concerns symbols, metaphors and all kinds of representative structures of language. These items are at the core of semiotics, the science of signs, which examines how people use signs in their mutual interaction and in their endeavours to make sense of the world they live in, their semiosis. When thinking about the practice of social work and researching it, a number of interpretational levels have to be passed before the research phase is reached. First of all, social workers have to interpret their clients' situations, which are recorded in the case files. In some rare cases those past situations are later reflected on in discussions or interviews, or put under the scrutiny of a researcher. Each new observation adds its own flavour to the mixture of meanings. Social workers combine their observations with previous experience and professional knowledge, and the situation at hand also influences their reactions. In addition, the interpretations made by social workers over the course of their daily working routines are never merely part of the social worker's personal process, but are also always inherently cultural.
Work aiming at social change is defined by the presence of an initial situation, a specific goal, and the means and ways of achieving it, which are – or should be – agreed upon by the social worker and the client in a situation that is unique and at the same time socially driven. Because of the inherently plot-based nature of social work, the practices related to it can be analysed as stories (see Dominelli 2005, 234), provided, of course, that they are signifying and told by someone. Research on these practices concentrates on impressions, perceptions, judgements, accounts, documents and so on. All these multifarious elements can be scrutinized as textual corpora, but not as just any textual material: in semiotic analysis, the material studied is characterised as verbal or textual and loaded with meanings. We present a contribution to research methodology, semiotic analysis, which in our view has at least implicit relevance to social work practices. Our examples of semiotic interpretation are drawn from our dissertations (Laine 2005; Saurama 2002). The data are official documents from the archives of a child welfare agency and transcriptions of interviews with shelter employees. These data can be defined as stories told by the social workers about what they have seen and felt. The official documents present only fragments and are often written in the passive voice (Saurama 2002, 70). The interviews carried out in the shelters can be described as stories in which the narrators are more familiar and known, and the material is characterised by the interaction between interviewer and interviewee. The levels of the story and of the telling of the story become apparent when interviews or documents are examined with semiotic tools. The roots of semiotic interpretation can be found in three different branches: American pragmatism, Saussurean linguistics in Paris, and the so-called formalism of Moscow and Tartu; in this paper, however, we engage with the so-called Parisian School of semiology, whose prominent figure was A. J. Greimas. The Finnish sociologists Pekka Sulkunen and Jukka Törrönen (1997a; 1997b) have further developed Greimas's ideas in their studies on socio-semiotics, and we lean on their work. In semiotics, social reality is conceived as a relationship between subjects, observations, and interpretations, mediated by natural language, the most common sign system among human beings (Mounin 1985; de Saussure 2006; Sebeok 1986). Signification is the act of associating an abstract concept (signified) with some physical instrument (signifier). These two elements together form the basic unit, the "sign", which never constitutes any kind of meaning alone. Meaning arises in a process of distinction in which signs are related to other signs, and in this chain of signs meaning diverges from reality (Greimas 1980, 28; Potter 1996, 70; de Saussure 2006, 46-48). One interpretative tool is to think of speech as a surface under which deep structures – values and norms – exist (Greimas & Courtes 1982; Greimas 1987). To our mind, semiotics is very much about playing with two different levels of a text: the syntagmatic surface, which is more or less faithful to the grammar, and the paradigmatic, semantic structure of values and norms hidden in the deeper meanings of interpretations.
Semiotic analysis deals precisely with the level of meaning that lies beneath the surface, but the only way to reach those meanings is through the textual level, the written or spoken text. That is why analytical tools are needed. In our studies we have used the semiotic square and actant analysis: the former is based on the distinction and categorisation of meanings, the latter on opening up the plot structure of narratives in order to reach the underlying value structures.
Abstract:
Degeneration of the intervertebral disc, sometimes associated with low back pain and abnormal spinal motion, represents a major health issue with high costs. Non-invasive assessment of degeneration via qualitative or quantitative MRI (magnetic resonance imaging) is possible; however, no relation between the mechanical properties and T2 maps of the intervertebral disc (IVD) has yet been considered, even though T2 relaxation time values quantify the degree of degeneration. Therefore, MRI scans and mechanical tests were performed on 14 human lumbar intervertebral segments freed from posterior elements and all soft tissues excluding the IVD. Degeneration was evaluated in each specimen using morphological criteria, qualitative T2-weighted images and quantitative axial T2 map data, and stiffness was calculated from the load-deflection curves of in vitro compression, torsion, lateral bending and flexion/extension tests. In addition to mean T2, the Otsu threshold of T2 (TOTSU), obtained with a robust and automatic histogram-based method that computes the optimal threshold maximizing the separation of two classes of values, was calculated for the anterior, posterior, left and right regions of each annulus fibrosus (AF). While mean T2 and degeneration grading schemes were not related to the IVDs' mechanical properties, TOTSU computed in the posterior AF correlated significantly with those classifications as well as with all stiffness values. TOTSU should therefore be included in future degeneration grading schemes.
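The Otsu threshold mentioned above can be computed from a histogram of a region's T2 values. The sketch below is a generic implementation of Otsu's method applied to synthetic T2 data, not the study's actual image-processing pipeline; the bin count and test values are assumptions.

```python
import numpy as np

def otsu_threshold(values, bins=128):
    """Otsu's method: return the threshold maximizing the between-class variance,
    i.e. the value that best separates the data into two classes."""
    values = np.asarray(values, dtype=float).ravel()
    hist, edges = np.histogram(values, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    p = hist / hist.sum()

    w0 = np.cumsum(p)              # probability of class 0 (below threshold)
    w1 = 1.0 - w0                  # probability of class 1 (above threshold)
    mu = np.cumsum(p * centers)    # cumulative mean
    mu_total = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        between_var = (mu_total * w0 - mu) ** 2 / (w0 * w1)
    between_var[~np.isfinite(between_var)] = 0.0
    return centers[np.argmax(between_var)]

# Synthetic T2 values (ms) mimicking a bimodal annulus fibrosus region
rng = np.random.default_rng(1)
t2_region = np.concatenate([rng.normal(45, 8, 300), rng.normal(90, 15, 200)])
print(round(otsu_threshold(t2_region), 1))  # threshold between the two T2 populations
```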
Abstract:
The global ocean is a significant sink for anthropogenic carbon (Cant), absorbing roughly a third of human CO2 emitted over the industrial period. Robust estimates of the magnitude and variability of the storage and distribution of Cant in the ocean are therefore important for understanding the human impact on climate. In this synthesis we review observational and model-based estimates of the storage and transport of Cant in the ocean. We pay particular attention to the uncertainties and potential biases inherent in different inference schemes. On a global scale, three data-based estimates of the distribution and inventory of Cant are now available. While the inventories are found to agree within their uncertainty, there are considerable differences in the spatial distribution. We also present a review of the progress made in the application of inverse and data assimilation techniques which combine ocean interior estimates of Cant with numerical ocean circulation models. Such methods are especially useful for estimating the air–sea flux and interior transport of Cant, quantities that are otherwise difficult to observe directly. However, the results are found to be highly dependent on modeled circulation, with the spread due to different ocean models at least as large as that from the different observational methods used to estimate Cant. Our review also highlights the importance of repeat measurements of hydrographic and biogeochemical parameters to estimate the storage of Cant on decadal timescales in the presence of the variability in circulation that is neglected by other approaches. Data-based Cant estimates provide important constraints on forward ocean models, which exhibit both broad similarities and regional errors relative to the observational fields. A compilation of inventories of Cant gives us a "best" estimate of the global ocean inventory of anthropogenic carbon in 2010 of 155 ± 31 PgC (±20% uncertainty). This estimate includes a broad range of values, suggesting that a combination of approaches is necessary in order to achieve a robust quantification of the ocean sink of anthropogenic CO2.
Abstract:
This is the second part of a two-part paper which offers a new approach to the valuation of ecosystem goods and services. In the first part a simple pre-industrial model was introduced to show how the interdependencies between the three subsystems, society, economy and nature, influence values, and how values change over time. In this second part the assumption of perfect foresight is dropped. I argue that, due to novelty and complexity, ex ante unpredictable change occurs within the three subsystems of society, economy and nature. The pre-industrial model introduced in part 1 again serves as a simple paradigm to show how unpredictable novel change limits the possibility of deriving accurate estimates of values.
Abstract:
In the profoundly changing and dynamic world of contemporary audiovisual media, what has remained surprisingly unaffected is regulation. In the European Union, the new Audiovisual Media Services Directive (AVMS), proposed by the European Commission on December 13, 2005, should allegedly rectify this situation. Amending the existing Television without Frontiers Directive, it should offer a fresh approach and meet the challenge of appropriately regulating media in a complex environment. It is meant to achieve a balance between the free circulation of TV broadcasts and new audiovisual media and the preservation of values of cultural identity and diversity, while respecting the principles of subsidiarity and proportionality inherent to the Community. The purpose of this paper is to examine whether and how the changes envisaged for the EC audiovisual media regime might influence cultural diversity in Europe. It subsequently addresses the question of whether the new AVMS properly safeguards the balance between competition and the public interest in this regard, or whether cultural diversity remains a mere political banner.
Abstract:
The comparison of radiotherapy techniques regarding secondary cancer risk has yielded contradictory results, possibly stemming from the many different approaches used to estimate risk. The purpose of this study was to make a comprehensive evaluation of the different available risk models applied to detailed whole-body dose distributions computed by Monte Carlo for various breast radiotherapy techniques, including conventional open tangents, 3D conformal wedged tangents and hybrid intensity-modulated radiation therapy (IMRT). First, organ-specific linear risk models developed by the International Commission on Radiological Protection (ICRP) and the Biological Effects of Ionizing Radiation (BEIR) VII committee were applied to mean doses for remote organs only and for all solid organs. Then, different general non-linear risk models were applied to the whole-body dose distribution. Finally, organ-specific non-linear risk models for the lung and breast were used to assess the secondary cancer risk for these two specific organs. A total of 32 different calculated absolute risks resulted in a broad range of values (between 0.1% and 48.5%), underlining the large uncertainties in absolute risk calculation. The ratio of risk between two techniques has often been proposed as a more robust assessment of risk than the absolute risk. We found that the ratio of risk between two techniques could also vary substantially depending on the approach to risk estimation. Sometimes the ratio of risk between two techniques would range between values smaller and larger than one, which translates into inconsistent results on the potentially higher risk of one technique compared to another. We found, however, that the hybrid IMRT technique resulted in a systematic reduction of risk compared with the other techniques investigated, even though the magnitude of this reduction varied substantially across the different approaches. Based on the epidemiological data available, a reasonable approach to risk estimation would be to use organ-specific non-linear risk models applied to the dose distributions of organs within or near the treatment fields (the lungs and contralateral breast in the case of breast radiotherapy), as the majority of radiation-induced secondary cancers are found in the beam-bordering regions.
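To make the comparison concrete, the sketch below shows the general shape of an organ-specific linear risk calculation and the risk ratio between two techniques. The organ doses, risk coefficients, and technique names are placeholders, not values from this study or from the ICRP/BEIR VII tables.

```python
# Illustrative only: linear organ-specific risk model applied to mean organ doses
# for two techniques, followed by the risk ratio. All numbers are placeholders.

mean_dose_Gy = {                       # hypothetical mean organ doses per technique
    "open_tangents": {"lung": 1.2, "contralateral_breast": 0.9, "thyroid": 0.05},
    "hybrid_IMRT":   {"lung": 0.8, "contralateral_breast": 0.6, "thyroid": 0.04},
}
risk_per_Gy = {"lung": 0.010, "contralateral_breast": 0.008, "thyroid": 0.001}  # placeholders

def absolute_risk(doses, coeffs):
    """Linear model: total risk = sum over organs of (risk coefficient x mean organ dose)."""
    return sum(coeffs[organ] * dose for organ, dose in doses.items())

risks = {tech: absolute_risk(d, risk_per_Gy) for tech, d in mean_dose_Gy.items()}
ratio = risks["hybrid_IMRT"] / risks["open_tangents"]
print(risks, round(ratio, 2))
```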
Abstract:
Recent studies on avalanche risk in alpine settlements have suggested that the development of risk depends strongly on variations in damage potential. Based on these findings, analyses of probable maximum losses in avalanche-prone areas of the municipality of Davos (CH) were used as an indicator for the long-term development of values at risk. Even though the results were subject to significant uncertainties, they underlined the dependency of today's risk on the historical development of land use: small changes in the lateral extent of endangered areas had a considerable impact on the exposure of values. In a second step, temporal variations in damage potential between 1950 and 2000 were compared in two study areas representing typical alpine socio-economic development patterns: Davos (CH) and Galtür (A). The resulting trends were found to be similar; the damage potential increased significantly in both number and value. Thus, the development of natural risk in settlements can largely be attributed to long-term shifts in damage potential.
Abstract:
In this study the relationship between religiosity and value priorities is differentiated, based on a multidimensional measurement of different contents of religiosity. The structure of values is conceptualized using Schwartz's (1992) two orthogonal dimensions of Self-transcendence vs. Self-enhancement and Openness to change vs. Conservation. The relations between these two dimensions and eight religious contents, ranging from open-minded to more close-minded forms of religiosity, were tested in a sample of church attenders (N = 685) gathered in Germany. The results show that, depending on the content of religiosity, different values are preferred (self-direction, universalism, benevolence, tradition and security values). The results indicate the importance of the content of religiosity for predicting value-loaded behaviors.
Abstract:
A new methodology is presented that combines active and passive remote sensing with simultaneous, collocated radiosounding data to study the effects of aerosol hygroscopic growth on particle optical and microphysical properties. The identification of hygroscopic growth situations combines the analysis of the multispectral aerosol particle backscatter coefficient and particle linear depolarization ratio with thermodynamic profiling of the atmospheric column. We analyzed the effects of hygroscopic growth on aerosol properties, namely the aerosol particle backscatter coefficient and the volume concentration profiles, using data gathered at the Granada EARLINET station. Two case studies, corresponding to different aerosol loads and aerosol types, are used to illustrate the potential of this methodology. Values of the aerosol particle backscatter coefficient enhancement factor range from 2.1 ± 0.8 to 3.9 ± 1.5, in the relative humidity ranges 60–90% and 40–83%, similar to values previously reported in the literature. Differences in the enhancement factor are directly linked to the composition of the atmospheric aerosol. The largest value of the aerosol particle backscatter coefficient enhancement factor corresponds to the presence of sulphate and marine particles, which are more strongly affected by hygroscopic growth. On the contrary, the lowest value of the enhancement factor corresponds to an aerosol mixture containing sulphates and slight traces of mineral dust. The Hänel parameterization is applied to these case studies, giving results within the range of values reported in previous studies, with γ exponents of 0.56 ± 0.01 (for anthropogenic particles slightly influenced by mineral dust) and 1.07 ± 0.01 (for the situation dominated by anthropogenic particles), showing the suitability of this remote sensing approach for studying the hygroscopic effects of atmospheric aerosol under ambient, unperturbed conditions. For the first time, the retrieval of volume concentration profiles for these cases using the Lidar Radiometer Inversion Code (LIRIC) allows us to analyze the effect of aerosol hygroscopic growth on aerosol volume concentration, revealing a stronger increase of the fine-mode volume concentration with increasing relative humidity.
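For reference, a Hänel-type parameterization relates the enhancement factor to relative humidity through the exponent γ. The sketch below assumes the commonly used functional form f(RH) = ((1 − RH)/(1 − RH_ref))^(−γ) with illustrative RH values; only the γ exponents are taken from the figures quoted above.

```python
def hanel_enhancement(rh, rh_ref, gamma):
    """Hanel-type hygroscopic enhancement factor:
    f(RH) = ((1 - RH) / (1 - RH_ref)) ** (-gamma), with RH given as a fraction (0-1)."""
    return ((1.0 - rh) / (1.0 - rh_ref)) ** (-gamma)

# Illustrative RH values; gamma exponents are those quoted in the abstract above
for gamma in (0.56, 1.07):
    f = hanel_enhancement(rh=0.90, rh_ref=0.60, gamma=gamma)
    print(f"gamma = {gamma}: enhancement from 60% to 90% RH = {f:.2f}")
```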
Abstract:
Five test runs were performed to assess possible bias when using the loss on ignition (LOI) method to estimate the organic matter and carbonate content of lake sediments. An accurate and stable weight loss was achieved after 2 h of burning pure CaCO3 at 950 °C, whereas the LOI of pure graphite at 530 °C showed a direct relation to sample size and exposure time, with only 40-70% of the possible weight loss reached after 2 h of exposure and smaller samples losing weight faster than larger ones. Experiments with a standardised lake sediment revealed a strong initial weight loss at 550 °C, but samples continued to lose weight at a slow rate at exposures of up to 64 h, likely the effect of loss of volatile salts, structural water of clay minerals or metal oxides, or of inorganic carbon after the initial burning of organic matter. A further test run revealed that at 550 °C samples in the centre of the furnace lost more weight than marginal samples. At 950 °C this pattern was still apparent but the differences became negligible. Again, LOI was dependent on sample size. An analytical LOI quality control experiment involving ten different laboratories was carried out, using each laboratory's own LOI procedure as well as a standardised LOI procedure to analyse three different sediments. The range of LOI values between laboratories measured at 550 °C was generally larger when each laboratory used its own method than when the standard method was used. This was similar at 950 °C, although the range of values tended to be smaller. The within-laboratory range of LOI measurements for a given sediment was generally small. Comparisons of the results of the individual and the standardised methods suggest that there is a laboratory-specific pattern in the results, probably due to differences in laboratory equipment and/or handling that could not be eliminated by standardising the LOI procedure. Factors such as sample size, exposure time, position of samples in the furnace and the laboratory performing the measurement affected the LOI results, with LOI at 550 °C being more susceptible to these factors than LOI at 950 °C. We therefore recommend that analysts be consistent in the LOI method used with respect to ignition temperatures, exposure times and sample size, and that they report these three parameters when referring to the method.
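For orientation, the LOI percentages discussed here are commonly calculated from sequential weighings as in the sketch below. The weights are invented for illustration, and the 1.36 factor (the molar ratio CO3/CO2 conventionally used to approximate carbonate content) is stated here as an assumption, not a result of this study.

```python
def loi_percent(weight_loss_g, dry_weight_g):
    """Loss on ignition expressed as a percentage of the 105 °C dry weight."""
    return 100.0 * weight_loss_g / dry_weight_g

# Sequential ignition of one sample (illustrative weights, in grams)
dw_105 = 2.000   # dry weight after drying at 105 °C
dw_550 = 1.740   # residue after ignition at 550 °C (organic matter burnt off)
dw_950 = 1.520   # residue after ignition at 950 °C (CO2 evolved from carbonates)

loi_550 = loi_percent(dw_105 - dw_550, dw_105)   # proxy for organic matter content
loi_950 = loi_percent(dw_550 - dw_950, dw_105)   # CO2 loss between 550 and 950 °C
carbonate = loi_950 * 1.36                       # CO2 loss x 60/44 ~ carbonate (CO3) content

print(loi_550, loi_950, round(carbonate, 1))     # 13.0 11.0 15.0 (% of dry weight)
```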
Abstract:
The goal of this study was to test the hypothesis that the aggregated state of natural marine particles constrains the sensitivity of optical beam attenuation to particle size. An instrumented bottom tripod was deployed at the 12-m node of the Martha's Vineyard Coastal Observatory to monitor particle size distributions, particle size-versus-settling-velocity relationships, and the beam attenuation coefficient (c_p) in the bottom boundary layer in September 2007. An automated in situ filtration system on the tripod collected 24 direct estimates of suspended particulate mass (SPM) during each of five deployments. On a sampling interval of 5 min, data from a Sequoia Scientific LISST 100x Type B were merged with data from a digital floc camera to generate suspended particle volume size distributions spanning diameters from approximately 2 µm to 4 cm. Diameter-dependent densities were calculated from the size-versus-settling-velocity data, allowing conversion of the volume size distributions to mass distributions, which were used to estimate SPM every 5 min. Estimated SPM and measured c_p from the LISST 100x were linearly correlated throughout the experiment, despite wide variations in particle size. The slope of the line, which is the ratio of c_p to SPM, was 0.22 m² g⁻¹. Individual estimates of c_p:SPM were between 0.2 and 0.4 m² g⁻¹ for volumetric median particle diameters ranging from 10 to 150 µm. The wide range of c_p:SPM values in the literature likely results from three factors capable of producing factor-of-two variability in the ratio: particle size, particle composition, and the finite acceptance angle of commercial beam transmissometers.
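A rough sketch of the volume-to-mass conversion described above (placeholder numbers throughout, not data from the deployment): a binned volume size distribution is multiplied by diameter-dependent effective densities to estimate SPM, from which the c_p:SPM ratio follows.

```python
import numpy as np

diameters_um = np.array([5.0, 20.0, 80.0, 300.0, 1200.0])   # bin median diameters
volume_conc_ul_l = np.array([2.0, 5.0, 8.0, 4.0, 1.0])      # particle volume per bin (uL/L)

# Hypothetical effective-density model: density decreases with floc diameter
rho_eff_kg_m3 = 2650.0 * (diameters_um / 5.0) ** -0.8

# 1 uL/L of particle volume is a volume fraction of 1e-6; mass concentration in g per m^3 of water
spm_g_m3 = np.sum(volume_conc_ul_l * 1e-6 * rho_eff_kg_m3 * 1e3)

cp_per_m = 3.0                                               # hypothetical beam attenuation (1/m)
print(round(spm_g_m3, 1), "g/m^3; cp:SPM =", round(cp_per_m / spm_g_m3, 2), "m^2/g")
```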