981 results for Output-only Modal Analysis
Abstract:
The measurement of inter-connectedness in an economy using input-output tables is not new; however, much of the previous literature has lacked any explicit dynamic dimension. Studies have tried to estimate the degree of inter-relatedness for an economy at a given point in time using one input-output table, and some have compared different economies at a point in time, but few have looked at the question of how inter-connectedness within an economy changes over time. The publication in 2010 of a consistent series of input-output tables for Scotland offers the researcher the opportunity to track changes in the degree of inter-connectedness over the ten-year period 1998 to 2007. The paper is in two parts. A simple measure of inter-connectedness is introduced in the first part of the paper and applied to the Scottish tables. In the second part of the paper an extraction method is applied sector by sector to the tables in order to estimate how inter-connectedness has changed over time for each industrial sector.
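As a concrete illustration of the kind of computation involved, the sketch below derives an aggregate inter-connectedness measure from the Leontief inverse of a hypothetical technical-coefficients matrix and gauges a sector's contribution by extracting it and recomputing; both the specific measure and the extraction rule are illustrative assumptions, not the paper's exact formulas.

```python
import numpy as np

def leontief_inverse(A):
    """Leontief inverse (I - A)^(-1) of a technical-coefficients matrix A."""
    return np.linalg.inv(np.eye(A.shape[0]) - A)

def interconnectedness(A):
    """A simple aggregate measure: average indirect requirements in the
    Leontief inverse beyond the initial unit injection (one of several
    possible choices, assumed here for illustration)."""
    n = A.shape[0]
    return (leontief_inverse(A).sum() - n) / n

def sector_contribution(A, k):
    """Hypothetical extraction step: zero out sector k's purchases and
    sales, then compare inter-connectedness with and without the sector."""
    A_ext = A.copy()
    A_ext[k, :] = 0.0
    A_ext[:, k] = 0.0
    return interconnectedness(A) - interconnectedness(A_ext)

# Toy three-sector table (illustrative coefficients only)
A = np.array([[0.10, 0.20, 0.05],
              [0.15, 0.05, 0.10],
              [0.05, 0.10, 0.20]])
print(interconnectedness(A), sector_contribution(A, 0))
```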
Abstract:
In an input-output context the impact of any particular industrial sector is commonly measured in terms of the output multiplier for that industry. Although such measures are routinely calculated and often used to guide regional industrial policy, the behaviour of such measures over time is an area that has attracted little academic study. The output multipliers derived from any one table will have a distribution; for some industries the multiplier will be relatively high, for some relatively low. The recent publication of consistent input-output tables for the Scottish economy makes it possible to examine trends in this distribution over the ten-year period 1998-2007. This is done by comparing the means and other summary measures of the distributions, the histograms and the cumulative densities. The results indicate a tendency for the multipliers to increase over the period. A Markov chain modelling approach suggests that this drift is a slow but long-term phenomenon which appears not to tend to an equilibrium state. The prime reason for the increase in the output multipliers is traced to a decline in the relative importance of intermediate inputs imported (both from the rest of the UK and the rest of the world) by Scottish industries. This suggests that models calibrated on the set of tables might have to be interpreted with caution.
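The multiplier itself has a standard definition that a short sketch can make concrete: the Type-I output multiplier of industry j is the j-th column sum of the Leontief inverse of the domestic coefficients matrix, which is also why a falling import share of intermediate inputs mechanically raises the multipliers. The matrix below is a toy example, not the Scottish data.

```python
import numpy as np

def output_multipliers(A):
    """Type-I output multipliers: column sums of the Leontief inverse.
    A is the domestic technical-coefficients matrix; imported inputs
    are excluded, so a declining import share raises the multipliers."""
    L = np.linalg.inv(np.eye(A.shape[0]) - A)
    return L.sum(axis=0)

# Toy three-sector coefficients (illustrative only)
A = np.array([[0.10, 0.20, 0.05],
              [0.15, 0.05, 0.10],
              [0.05, 0.10, 0.20]])
m = output_multipliers(A)
print(m, m.mean())   # the distribution and one of its summary measures
```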
Abstract:
The purpose of this paper is to highlight the curiously circular course followed by mainstream macroeconomic thinking in recent times. Having broken from classical orthodoxy in the late 1930s via Keynes's General Theory, over the last three or four decades the mainstream conventional wisdom, regressing rather than progressing, has come to embrace a conception of the working of the macroeconomy which is again of a classical, essentially pre-Keynesian, character. At the core of the analysis presented in the typical contemporary macro textbook is the (neo)classical model of the labour market, which represents employment as determined (given conditions of productivity) by the terms of labour supply. While it is allowed that changes in aggregate demand may temporarily affect output and employment, the contention is that in due course employment will automatically return to its 'natural' (full employment) level. Unemployment is therefore identified as a merely frictional or voluntary phenomenon: involuntary unemployment, in other words persisting demand-deficient unemployment, is entirely absent from the picture. Variations in aggregate demand are understood to have a lasting impact only on the price level, not on output and employment. This in effect amounts to a return to a Pigouvian conception of the sort targeted by Keynes in the General Theory. We take the view that this reversion to ideas which should by now be obsolete reflects not the discovery of logical or empirical deficiencies in Keynes's analysis, but results rather from doctrinaire blindness and a failure of scholarship on account of which essential features of Keynes's theory have been overlooked or misrepresented. There is an urgent need for a critical appraisal of the current conventional macroeconomic wisdom.
Abstract:
Culture forms of four strains of Endotrypanum (E. schaudinni and E. monterogeii) were processed for transmission electron microscopy and analyzed at the ultrastructural level. Quantitative data on some cytoplasmic organelles were obtained by stereology. All culture forms were promastigotes. In their cytoplasm four different organelles could be found: lipid inclusions (0.2-0.4 µm in diameter), membrane-bounded vacuoles (0.10-0.28 µm in diameter), glycosomes (0.2-0.3 µm in diameter), and the mitochondrion. The kinetoplast appears as a thin band, except in strain IM201, which possesses a broader structure and is possibly not a member of this genus. Clusters of virus-like particles were seen in the cytoplasm of strain LV88. The data obtained show that all strains have the typical morphological features of the trypanosomatids. Only strain IM201 could be differentiated from the others, owing to its larger kinetoplast-DNA network and its large mitochondrial and glycosomal relative volumes. The morphometrical data did not allow differentiation between E. schaudinni (strains IM217 and M6226) and E. monterogeii (strain LV88).
Abstract:
The Oswaldo Cruz Foundation produces most of the yellow fever (YF) vaccine prepared worldwide. As part of a broader approach to determining the genetic variability in YF 17D seeds and vaccines and its relevance to viral attenuation, the 17DD virus was purified directly from chick embryo homogenates, which have been the source of virus used for vaccination of millions of people in Brazil and other countries for half a century. Neutralization and hemagglutination tests showed that the purified virus is similar to the original stock. Furthermore, radioimmune precipitation of 35S-methionine-labeled viral proteins using mouse hyperimmune ascitic fluid revealed identical patterns for the purified 17DD virus and the YF 17D-204 strain, except for the 17DD E protein, which migrated more slowly on SDS-PAGE. This difference is likely to be due to N-linked glycosylation. Finally, comparison by northern blot hybridization of the virion RNAs of purified 17DD virus with those of two other strains of YF virus revealed only genome-sized molecules for all three viruses. These observations suggest that the vaccine phenotype is primarily associated with the accumulation of mutations.
Abstract:
This study presents a first attempt to extend the "Multi-scale integrated analysis of societal and ecosystem metabolism" (MuSIASEM) approach to a spatial dimension using GIS techniques in the Metropolitan area of Barcelona. We use a combination of census and commercial databases, along with a detailed land cover map, to create a layer of Common Geographic Units that we populate with the local values of human time spent in different activities according to the MuSIASEM hierarchical typology. In this way, we mapped the hours of available human time against the working hours spent in different locations, highlighting the gradients in spatial density between the residential locations of workers (generating the work supply) and the places where the working hours actually take place. We found a strongly trimodal pattern of clusters of areas with different combinations of values of time spent on household activities and on paid work. We also measured and mapped the spatial segregation between these two activities and put forward the conjecture that this segregation increases with higher energy throughput, as the size of the functional units must be able to cope with the flow of exosomatic energy. Finally, we discuss the effectiveness of the approach by comparing our geographic representation of exosomatic throughput to that obtained with conventional methods.
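The segregation measurement could, for instance, take the form of a dissimilarity index over the Common Geographic Units; the sketch below uses the Duncan index, an assumed choice for illustration rather than the study's stated method.

```python
import numpy as np

def dissimilarity_index(household_hours, work_hours):
    """Duncan dissimilarity index between two spatial distributions of
    human time (hours per Common Geographic Unit). 0 means identical
    spatial patterns, 1 means complete segregation. The use of this
    particular index here is an assumption, not the paper's method."""
    h = np.asarray(household_hours, dtype=float)
    w = np.asarray(work_hours, dtype=float)
    return 0.5 * np.abs(h / h.sum() - w / w.sum()).sum()

# Toy example: five geographic units with opposed time profiles
print(dissimilarity_index([120, 80, 60, 40, 20], [10, 30, 60, 90, 110]))
```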
Abstract:
The purpose of this project is the analysis of the Finca la Esperanza, in the locality of Pueblo Nuevo, in the Republic of Nicaragua. The analysis examines soil parameters, water quality and economic variables, with the aim of proposing improvements to the farm's performance in both economic and environmental terms. The initiative for this study arose from the needs of the farmer himself, who approaches rural development on the basis of environmentally respectful activities. The strategies addressing the social problems caused by industrial agricultural production, such as the rural exodus, the decline of conventional agriculture and the new role of agricultural spaces in society, were formalized under the heading of Sustainable Development. At the farm level, any conception of sustainability requires that the agrosystem be treated as an ecosystem, in which research and production pursue not only high yields but the optimization of the system as a whole. This requires reconciling economic viability with other variables, such as ecological stability and social equity. To that end, the basic principles of a sustainable system are: the conservation of renewable resources, the adaptation of crops to the environment, and the maintenance of a moderate but sustainable level of production. As a result of the analyses and observations made during the stay on the farm, a ten-year action plan is proposed that seeks not only maximum yields but also the survival of the resources on which the Videa Vanegas family, the owners, depend.
Abstract:
Limiting dilution analysis was used to quantify Trypanosoma cruzi in the lymph nodes, liver and heart of Swiss and C57BL/10 mice. The results showed that, in Swiss and BL/10 mice infected with the T. cruzi Y strain, the number of parasites/mg of tissue increased during the course of the infection in both types of mice, although a greater number of parasites was observed in heart tissue from Swiss mice than from BL/10 mice. With regard to liver tissue, the parasite load in the initial phase of infection was higher than in the heart. In experiments using the T. cruzi Colombian strain, the parasite load in the hearts of Swiss and BL/10 mice increased relatively slowly, although high levels of parasitization were nonetheless observable by the end of the infection. In the liver and lymph nodes, the concentration of parasites was lower over the entire course of infection than in the heart. Both strains thus maintained their characteristic tissue tropisms. The limiting dilution assay (LDA) proved to be an appropriate method for more precise quantification of T. cruzi, comparing favorably with direct microscopic methods that only give approximate scores.
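The LDA estimate rests on the single-hit Poisson model, under which the probability that a culture well is negative at dose d is exp(-f·d) for a parasite frequency f; the sketch below fits f across dilutions by least squares, a simplification of the maximum-likelihood fitting usually applied, with toy data rather than the study's measurements.

```python
import numpy as np

def lda_frequency(doses, negative_fraction):
    """Single-hit Poisson model for limiting dilution analysis:
    P(negative) = exp(-f * dose), so -ln(P0) is linear in dose.
    Fit f by least squares through the origin (a simple estimator;
    the study may well have used maximum likelihood instead)."""
    d = np.asarray(doses, dtype=float)
    y = -np.log(np.asarray(negative_fraction, dtype=float))
    return (d @ y) / (d @ d)   # slope = parasites per unit of tissue

# Toy data: mg of tissue per well and fraction of negative wells
print(lda_frequency([0.5, 1.0, 2.0, 4.0], [0.78, 0.61, 0.37, 0.14]))
```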
Abstract:
This guide introduces Data Envelopment Analysis (DEA), a performance measurement technique, in a way that is appropriate for decision makers with little or no background in economics and operational research. The use of mathematics is kept to a minimum. The guide therefore adopts a strongly practical approach, allowing decision makers to conduct their own efficiency analyses and to interpret the results easily. DEA helps decision makers in the following ways: - By calculating an efficiency score, it indicates whether a firm is efficient or has capacity for improvement. - By setting target values for inputs and outputs, it calculates by how much inputs must be decreased or outputs increased for the firm to become efficient. - By identifying the nature of returns to scale, it indicates whether a firm has to decrease or increase its scale (or size) in order to minimize its average cost. - By identifying a set of benchmarks, it specifies which other firms' practices should be analysed in order for a firm to improve its own.
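To make the efficiency score concrete, the sketch below solves the classic input-oriented CCR envelopment model with scipy's linear programming routine; this is a generic textbook formulation, not code from the guide, and real applications add slack analysis and the BCC variant to diagnose returns to scale.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y):
    """Input-oriented CCR DEA efficiency scores (envelopment form).
    X: (m inputs x n firms), Y: (s outputs x n firms).
    Returns theta in (0, 1]; 1 means the firm is on the frontier."""
    m, n = X.shape
    s = Y.shape[0]
    scores = []
    for o in range(n):
        c = np.r_[1.0, np.zeros(n)]               # minimize theta
        A_in = np.hstack([-X[:, [o]], X])         # sum_j lam_j x_ij <= theta x_io
        A_out = np.hstack([np.zeros((s, 1)), -Y])  # sum_j lam_j y_rj >= y_ro
        A_ub = np.vstack([A_in, A_out])
        b_ub = np.r_[np.zeros(m), -Y[:, o]]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(None, None)] + [(0, None)] * n)
        scores.append(res.fun)
    return np.array(scores)

# Toy example: 1 input, 1 output, 4 firms
X = np.array([[2.0, 4.0, 5.0, 6.0]])
Y = np.array([[2.0, 5.0, 4.0, 6.0]])
print(dea_ccr_input(X, Y))   # firm 2 sets the frontier; others score < 1
```

An inefficient firm's targets follow directly: scaling its inputs by its score theta projects it onto the frontier spanned by its benchmark peers (the firms with positive lambda weights in the solution).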
Abstract:
Context. Seizures during intoxication with pharmaceuticals are a well-known complication. However, only a few studies report on the drugs commonly involved and calculate the seizure potential of these drugs. Objectives. To identify the pharmaceutical drugs most commonly associated with seizures after single-agent overdose, the seizure potential of these pharmaceuticals, the age distribution of the cases with seizures and the ingested doses. Methods. A retrospective review of acute single-agent exposures to pharmaceuticals reported to the Swiss Toxicological Information Centre (STIC) between January 1997 and December 2010 was conducted. Exposures which resulted in at least one seizure were identified. The seizure potential of a pharmaceutical was calculated by dividing the number of cases with seizures by the number of all cases recorded with that pharmaceutical. Data were analyzed using descriptive statistics. Results. We identified 15,441 single-agent exposures. Seizures occurred in 313 cases. The most prevalent pharmaceuticals were mefenamic acid (51 of the 313 cases), citalopram (34), trimipramine (27), venlafaxine (23), tramadol (15), diphenhydramine (14), amitriptyline (12), carbamazepine (11), maprotiline (10), and quetiapine (10). Antidepressants were involved in 136 cases. Drugs with a high seizure potential were bupropion (31.6%, seizures in 6 of 19 cases, 95% CI: 15.4-50.0%), maprotiline (17.5%, 10/57, 95% CI: 9.8-29.4%), venlafaxine (13.7%, 23/168, 95% CI: 9.3-19.7%), citalopram (13.1%, 34/259, 95% CI: 9.5-17.8%), and mefenamic acid (10.9%, 51/470, 95% CI: 8.4-14.0%). In adolescents (15-19 y/o) 23.9% (95% CI: 17.6-31.7%) of the cases involving mefenamic acid resulted in seizures, but only 5.7% (95% CI: 3.3-9.7%) in adults (≥ 20 y/o; p < 0.001). For citalopram these figures were 22.0% (95% CI: 12.8-35.2%) and 10.9% (95% CI: 7.1-16.4%), respectively (p = 0.058). The probability of seizures with mefenamic acid, citalopram, trimipramine, and venlafaxine increased as the ingested dose increased. Conclusions. Antidepressants were frequently associated with seizures in overdose, but other pharmaceuticals, such as mefenamic acid, were also associated with seizures in a considerable number of cases. Bupropion was the pharmaceutical with the highest seizure potential, even though overdose with bupropion was uncommon in our sample. Adolescents might be more susceptible to seizures after mefenamic acid overdose than adults. Part of this work has been published as a conference abstract for the XXXIV International Congress of the European Association of Poisons Centres and Clinical Toxicologists (EAPCCT), 27-30 May 2014, Brussels, Belgium: Abstract 8, Clin Toxicol 2014;52(4):298.
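The seizure-potential computation is a simple proportion with a confidence interval; the sketch below pairs it with a Wilson interval, which is an assumed choice, since the paper does not state which CI method it used.

```python
from math import sqrt

def seizure_potential(seizure_cases, total_cases, z=1.96):
    """Proportion of overdose cases with at least one seizure, plus a
    Wilson 95% CI (assumed interval method; the paper does not state
    which CI it used). Returns (estimate, low, high) as fractions."""
    p = seizure_cases / total_cases
    denom = 1 + z**2 / total_cases
    centre = (p + z**2 / (2 * total_cases)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / total_cases
                              + z**2 / (4 * total_cases**2))
    return p, centre - half, centre + half

# Bupropion: seizures in 6 of 19 single-agent exposures
print(seizure_potential(6, 19))   # ~ (0.316, 0.154, 0.540)
```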
Abstract:
The Kilombero Malaria Project (KMP) attempts to define operationally useful indicators of levels of transmission and disease, together with health-system-relevant monitoring indicators, to evaluate the impact of disease control at the community or health facility level. The KMP is a longitudinal community-based study (N = 1024) in rural southern Tanzania, investigating risk factors for malarial morbidity and developing household-based malaria control strategies. Biweekly morbidity and bimonthly serological, parasitological and drug consumption surveys are carried out in all study households. Mosquito densities are measured biweekly in 50 sentinel houses by timed light traps. Determinants of transmission and indicators of exposure were not strongly aggregated within households. Subjective morbidity (recalled fever), objective morbidity (elevated body temperature and high parasitaemia) and chloroquine consumption were strongly aggregated within a few households. Nested analysis of anti-NANP40 antibody titers suggests that only approximately 30% of the titer variance can be explained by household clustering and that the largest proportion of antibody titer variability must be explained by unmeasured behavioral determinants relating to an individual's level of exposure within a household. Indicators for evaluation and monitoring, as well as outcome measures, are described within the context of health service management to describe control measure output in terms of community effectiveness.
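The household-clustering figure is a variance-components statement; a one-way random-effects intraclass correlation of the kind sketched below (a textbook estimator, assumed here for illustration with simulated data) expresses the share of titer variance attributable to between-household differences.

```python
import numpy as np

def household_icc(groups):
    """One-way ANOVA estimate of the intraclass correlation: the share
    of total variance attributable to between-household differences.
    groups: list of arrays of (log) titers, one array per household."""
    k = len(groups)
    n_i = np.array([len(g) for g in groups])
    N = n_i.sum()
    grand = np.concatenate(groups).mean()
    ss_between = sum(n * (g.mean() - grand) ** 2
                     for n, g in zip(n_i, groups))
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    ms_b = ss_between / (k - 1)
    ms_w = ss_within / (N - k)
    n0 = (N - (n_i ** 2).sum() / N) / (k - 1)   # effective group size
    var_b = max((ms_b - ms_w) / n0, 0.0)
    return var_b / (var_b + ms_w)

# Simulated households whose means vary enough to give an ICC near 0.3
rng = np.random.default_rng(0)
homes = [rng.normal(loc=mu, scale=1.0, size=5)
         for mu in rng.normal(0.0, 0.65, size=40)]
print(household_icc(homes))
```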
Abstract:
SUMMARY: Eukaryotic DNA interacts with nuclear proteins through non-covalent ionic interactions. Proteins can recognize specific nucleotide sequences through steric interactions with the DNA, and these specific protein-DNA interactions are the basis for many nuclear processes, e.g. gene transcription, chromosomal replication, and recombination. A new technology termed ChIP-Seq has recently been developed for the analysis of protein-DNA interactions on a whole-genome scale; it is based on immunoprecipitation of chromatin followed by a high-throughput DNA sequencing procedure. ChIP-Seq is a novel technique with great potential to replace older techniques for the mapping of protein-DNA interactions. In this thesis, we bring some new insights into ChIP-Seq data analysis. First, we point out some common and so far unrecognized artifacts of the method. The sequence tag distribution in the genome is not uniform, and we have found extreme hot-spots of tag accumulation over specific loci in the human and mouse genomes. These artifactual sequence-tag accumulations will create false peaks in every ChIP-Seq dataset, and we propose different filtering methods to reduce the number of false positives. Next, we propose random sampling as a powerful analytical tool in ChIP-Seq data analysis that can be used to infer biological knowledge from massive ChIP-Seq datasets. We created an unbiased random sampling algorithm and used this methodology to reveal some of the important biological properties of Nuclear Factor I (NFI) DNA-binding proteins. Finally, by analyzing the ChIP-Seq data in detail, we revealed that Nuclear Factor I transcription factors mainly act as activators of transcription, and that they are associated with specific chromatin modifications that are markers of open chromatin. We speculate that NFI factors only interact with DNA wrapped around the nucleosome. We also found multiple loci that indicate possible chromatin barrier activity of NFI proteins, which could suggest the use of NFI binding sequences as chromatin insulators in biotechnology applications.
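The unbiased sampling step can be illustrated with reservoir sampling, which draws a fixed-size random subsample from a tag stream of any length in a single pass; this is a generic implementation, not the thesis's own algorithm.

```python
import random

def sample_tags(tags, k, seed=0):
    """Unbiased random subsample of k sequence tags via reservoir
    sampling: each tag in the stream ends up in the sample with
    equal probability k/N, without knowing N in advance."""
    rng = random.Random(seed)
    reservoir = []
    for i, tag in enumerate(tags):
        if i < k:
            reservoir.append(tag)
        else:
            j = rng.randint(0, i)
            if j < k:
                reservoir[j] = tag
    return reservoir

# Toy usage: subsample 5 of 100 (chromosome, position) tags
tags = [("chr1", 1000 + 10 * i) for i in range(100)]
print(sample_tags(tags, 5))
```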
Abstract:
Background: Despite their relevance to the prevention of sexually transmitted infections, there are few data on the frequency of recourse to prostitution in the male population in Switzerland. Using data gathered for the evaluation of the Swiss AIDS prevention strategy, we analysed net aggregate change and cohort-based change in the lifetime prevalence of recourse to prostitution. Methods: Seven repeated cross-sectional telephone surveys of the general population aged 17-45 years (17-30 years only for the 1987 and 1988 surveys) were undertaken from 1987 to 2000, providing information on sexual behaviour including men's recourse to prostitution (total n = 9318). Age categories were: 17-20, 21-25, 26-30, 31-35, 36-40 and 41-45 years. Prevalence at 17-30 years was available in all surveys and prevalence at 41-45 years was available for 1989-2000, though not for the same cohorts. Intra-cohort increase in prevalence over 10 years was analysed using truncated information for the cohorts aged 21-25 and 26-30 years in 1987 and 1990. Population estimates were computed with 95% confidence intervals (CI). Results: No net change occurred in the prevalence among males aged 17-45 years between 1989 (17.6%, CI = 15.4-20.0) and 2000 (17.7%, CI = 15.6-20.0). The median starting prevalence of recourse to prostitution at age 17-20 was 4.8% (in 1989, CI = 2.0-9.7) and the range was from 1.8% (in 1994) to 10.4% (in 1990). The median ending prevalence at age 41-45 was 21.9% (in 1994, CI = 16.7-27.9) and the range was from 17.9% (in 2000) to 26.1% (in 1992). No clear trend was observed in either starting or ending prevalence. Intra-cohort evolution of the 1987 and 1990 cohorts was very similar. Conclusions: Based on the available data, there was no net (aggregate) change in the prevalence of recourse to prostitution by males in Switzerland between 1989 and 2000. Within the time frame available, intra-cohort evolution was also very similar.
Abstract:
This paper presents an initial attempt to tackle the ever so "tricky" points encountered when dealing with energy accounting, and thereafter illustrates how such a system of accounting can be used when assessing the metabolic changes in societies. The paper is divided into four main sections. The first three present a general discussion of the main issues encountered when conducting energy analyses. The last section then combines this heuristic approach with its actual formalization, in quantitative terms, for the analysis of possible energy scenarios. Section one covers the broader issue of how to account for the relevant categories used when accounting for Joules of energy, emphasizing the clear distinction between Primary Energy Sources (PES), the physical exploited entities that are used to derive useable energy forms, and Energy Carriers (EC), the actual useful energy that is transmitted for the appropriate end uses within a society. Section two sheds light on the concept of Energy Return on Investment (EROI). Here it is emphasized that a certain amount of energy carriers must already be available in order to extract and exploit Primary Energy Sources and thereby generate a net supply of energy carriers. It is pointed out that the current trend of intense energy supply has only been possible because of the great use of, and dependence on, fossil energy. Section three follows up on the discussion of EROI, indicating that a single numeric indicator such as an output/input ratio is not sufficient for assessing the performance of energetic systems. Rather, an integrated approach is called for, one that incorporates (i) how big the net supply of Joules of EC can be, given an amount of extracted PES (the external constraints); (ii) how much EC needs to be invested to extract an amount of PES; and (iii) the power level that it takes for both processes to succeed. Section four ultimately puts the theoretical concepts into play, assessing how the metabolic performance of societies can be accounted for within this analytical framework.
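The EROI bookkeeping in sections two and three reduces to a few lines of arithmetic once PES and EC are kept distinct; the sketch below uses toy numbers and an illustrative accounting, not the paper's formal scheme.

```python
def eroi(ec_delivered, ec_invested):
    """Energy Return on Investment: Joules of energy carriers (EC)
    delivered per Joule of EC invested in extracting and converting
    Primary Energy Sources (PES)."""
    return ec_delivered / ec_invested

def net_supply(pes_extracted, conversion_efficiency, ec_invested):
    """Net EC supply: gross carriers derived from PES minus the
    carriers consumed by the energy sector itself (illustrative
    accounting only)."""
    gross = pes_extracted * conversion_efficiency
    return gross - ec_invested

# Toy numbers in petajoules: 1000 PJ of PES converted at 40%,
# with 50 PJ of carriers reinvested in the energy sector
gross = 1000 * 0.40
print(net_supply(1000, 0.40, 50), eroi(gross, 50))  # 350.0 PJ net, EROI = 8.0
```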