924 results for Mandatory Helmet Usage.
Abstract:
The use of contraceptives has become an increasingly important issue in developing countries today, especially in Namibia, where fertility and HIV prevalence are high. The condom is the only widely available contraceptive that protects against sexually transmitted diseases, while injections, the pill, and other methods can also be used to prevent pregnancy. Contraceptive use has been found to correlate with certain sociodemographic factors, including education level and wealth. The aim of this study was to examine contraceptive use, intentions to use contraceptives, and knowledge of HIV/AIDS and other sexually transmitted diseases among women in Namibia. This was done from a historical perspective by studying usage patterns from 1990 to the late 2000s. In addition, the influence of sociodemographic factors, especially education, on contraceptive use was examined, as was the association between education level and contraceptive use at the regional level. The study was based on Namibia Demographic and Health Survey data collected in 1992, 2000, and 2006-2007. Prevalences and the use of specific methods were studied separately for different background variables in 1992, 2000, and 2006-2007, and by education level and region for 2006-2007. Education was measured separately at the individual and aggregate levels. The association between contraceptive use and education was examined using logistic regression, in which sociodemographic background factors were controlled for in six models. The results showed that contraceptive use has doubled since the early 1990s. Differences between women with different education levels existed already in the early 1990s, as did differences between occupational groups. The study showed that higher education increases contraceptive use, even when sociodemographic background factors, including aggregate-level education and contraceptive use, were controlled for. The study suggests that aggregate-level education does not alone affect an individual's contraceptive use. The latter results were, however, not statistically significant and cannot be generalized to Namibian women at large.
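To make the modeling strategy concrete, here is a minimal sketch of a stepwise logistic-regression analysis of this kind in Python. The file name, column names, and specific covariates are hypothetical stand-ins (the study used six models and the actual NDHS variables); only the overall pattern of adding controls model by model follows the description above.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical DHS-style extract: one row per respondent. The file name
# and column names are illustrative, not the actual NDHS variables.
df = pd.read_csv("ndhs_2006_07.csv")

# Controls are added step by step, mirroring (in abbreviated form) the
# six-model design described above; the last model adds aggregate-level
# education and contraceptive prevalence.
models = [
    "uses_contraception ~ C(education)",
    "uses_contraception ~ C(education) + age + C(region)",
    "uses_contraception ~ C(education) + age + C(region) + C(wealth_quintile)",
    "uses_contraception ~ C(education) + age + C(region) + C(wealth_quintile)"
    " + region_mean_education + region_contraceptive_prevalence",
]
for i, formula in enumerate(models, 1):
    fit = smf.logit(formula, data=df).fit(disp=False)
    print(f"Model {i} odds ratios:")
    print(np.exp(fit.params).round(2))
```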
Abstract:
This thesis consists of an introductory chapter and four applications, each constituting a chapter of its own. The common element underlying the chapters is the econometric methodology: the applications rely mostly on leading econometric techniques for estimating causal effects. The first chapter introduces the econometric techniques employed in the remaining chapters. Chapter 2 studies the effects of shocking news on student performance, exploiting the fact that the school shooting in Kauhajoki in 2008 coincided with the matriculation examination period of that fall. It shows that men's performance declined due to the news of the school shooting; for women, no similar pattern is observed. Chapter 3 studies the effects of the minimum wage on employment by applying the changes-in-changes (CIC) estimator to the original Card and Krueger (1994; CK) and Neumark and Wascher (2000; NW) data. Its main result is that the employment effect of a minimum-wage increase is positive for small fast-food restaurants and negative for big ones: the controversial positive employment effect reported by CK is overturned for big fast-food restaurants, while the NW data, in contrast to their original results, are shown to support the positive employment effect. Chapter 4 applies the CIC estimator to state-specific U.S. data on traffic fatalities (collected by Cohen and Einav [2003; CE]) to re-evaluate the effects of seat belt laws. It confirms the CE results that, on average, implementing a mandatory seat belt law increases the seat belt usage rate and decreases the total fatality rate. In contrast to CE, it also finds evidence for the compensating-behavior theory, observed especially in states along the U.S. border. Chapter 5 studies life-cycle consumption in Finland, with special interest in the baby boomers and older households. It shows that the baby boomers smooth their consumption over the life cycle more than other generations, and that older households smoothed their life-cycle consumption more than younger households as a result of the recession of the 1990s.
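Since the CIC estimator of Athey and Imbens (2006) recurs in Chapters 3 and 4, a minimal numerical sketch may help. The implementation below is a bare-bones version for continuous outcomes (empirical CDFs with interpolated quantiles), and the data are synthetic; the thesis's actual estimations on the CK/NW and CE data are of course more involved.

```python
import numpy as np

def cic_att(y00, y01, y10, y11):
    """Changes-in-changes (Athey & Imbens 2006) estimate of the average
    treatment effect on the treated, for continuous outcomes.
    y00/y01: control-group outcomes in periods 0/1;
    y10/y11: treated-group outcomes in periods 0/1."""
    y00, y01 = np.sort(y00), np.sort(y01)
    # Empirical CDF of controls in period 0, evaluated at treated period-0 outcomes.
    ranks = np.searchsorted(y00, y10, side="right") / len(y00)
    # Map those quantiles through the period-1 control distribution.
    counterfactual = np.quantile(y01, np.clip(ranks, 0.0, 1.0))
    return np.mean(y11) - np.mean(counterfactual)

# Illustrative fake data: employment counts at fast-food restaurants.
rng = np.random.default_rng(0)
att = cic_att(rng.normal(20, 5, 200), rng.normal(21, 5, 200),
              rng.normal(18, 4, 150), rng.normal(20, 4, 150))
print(f"CIC estimate of the employment effect: {att:.2f}")
```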
Abstract:
An extensive transmission network facilitates electricity trading between Finland, Sweden, Norway and Denmark. Currently most of the area's power generation is traded at NordPool, where trading volumes have steadily increased since the exchange was founded in the early 1990s. The Nordic electricity market is expected to follow this trend and integrate further with the other European electricity markets. Hydro power supplies roughly half of the Nordic electricity market, and most of it is generated in Norway. This dominant role of hydro power distinguishes the Nordic electricity market from most other market places. Hydro generation varies mainly with hydro reservoir levels and the demand for electricity, and the reservoirs are in turn driven by water inflows that differ from year to year. Reservoir levels explain much of the behaviour of the Nordic electricity market. Kauppi and Liski (2008), among others, have therefore developed a model that analyzes the behaviour of the market using hydro reservoirs as explanatory factors. Their model produces, for example, the welfare loss due to socially suboptimal hydro reservoir usage, the socially optimal electricity price, hydro reservoir storage, and thermal reservoir storage; these are referred to as outcomes. The model does not describe the actual market situation but rather an ideal one: the market is controlled by a single agent who commands all power generation reserves, which is referred to as the socially optimal strategy. The Kauppi and Liski (2008) article also treats the case where an individual agent holds a certain fraction of market power, e.g. 20% or 30%; to keep this thesis focused, that part of their paper is omitted. The goal of this thesis is two-fold. First, we extend the results of the socially optimal strategy to the years 2006-08, as the earlier study ends in 2005. Second, we aim to improve on the methods of the previous study. The thesis produces several outcomes of the socially optimal actions (SPOT price, welfare loss, etc.). Welfare loss is interesting because it describes the inefficiency of the market; the SPOT price matters to market participants because it often feeds into end users' electricity bills. We also modify the model in an attempt to improve it by using more accurate input data, e.g. by considering the effect of pollution trading rights on the input data; after these modifications, new welfare losses are calculated and compared with the pre-modification results. The hydro reservoir has the highest explanatory significance in the model, followed by thermal power. In the Nordic market, thermal power reserves consist mostly of nuclear power and other thermal sources (coal, natural gas, oil, peat). It can be argued that hydro and thermal reservoirs determine electricity supply. Roughly speaking, the model takes electricity demand and supply, together with several related parameters (water inflow, oil price, etc.), and yields the socially optimal outcomes. The author of this thesis is not aware of any similar model having been tested before. Some other studies come close to the Kauppi and Liski (2008) model but have a somewhat different focus; a specific feature of the model is its focus on long-run capacity usage, which differs from previous studies of short-run market power. The closest study to the model concerns California's wholesale electricity markets but uses a different methodology. The thesis is organized as follows.
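As a rough illustration of what the socially optimal strategy means here, the toy dynamic program below schedules hydro releases for a single planner so as to minimise total thermal generation cost over a year. The horizon, demand and inflow profiles, and cost function are all made up for illustration; this is not the Kauppi and Liski (2008) model, which is stochastic and far richer.

```python
import numpy as np

T, S_MAX, GRID = 52, 100.0, 201  # weeks, reservoir capacity, grid points
demand = 10 + 2 * np.sin(np.arange(T) * 2 * np.pi / T)  # weekly demand (toy)
inflow = 5 + 3 * np.cos(np.arange(T) * 2 * np.pi / T)   # weekly water inflow (toy)
levels = np.linspace(0, S_MAX, GRID)

def thermal_cost(q):
    return 0.5 * q ** 2  # convex cost of thermal generation

# Backward induction: value of each reservoir level at each week.
V = np.zeros(GRID)
for t in reversed(range(T)):
    V_new = np.full(GRID, np.inf)
    for i, s in enumerate(levels):
        # Feasible hydro release: bounded by stored water plus inflow, and by demand.
        for h in np.linspace(0, min(s + inflow[t], demand[t]), 21):
            s_next = min(s + inflow[t] - h, S_MAX)
            j = min(np.searchsorted(levels, s_next), GRID - 1)
            V_new[i] = min(V_new[i], thermal_cost(demand[t] - h) + V[j])
    V = V_new

print(f"Planner's minimum thermal cost from a half-full reservoir: {V[GRID // 2]:.1f}")
```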
Abstract:
The application of nucleic acid probes to the detection of pathogenic micro-organisms has become an integral part of diagnostic technologies. In this study, Plasmodium vivax-specific DNA probes were identified by genomic subtractive hybridization. In this approach, recombinant clones from a P. vivax genomic library are screened with radiolabelled human and P. falciparum DNA. Colonies that react with labelled P. falciparum or human DNA are eliminated, and those that produce no autoradiographic signal are subjected to further rounds of screening. Three P. vivax-specific DNA probes were obtained by these repeated screenings. Further analyses indicate that these probes are specific and sensitive enough to detect P. vivax infection in clinical blood samples when used in a non-radioactive DNA hybridization assay. (C) 1995 Academic Press Limited
Abstract:
Fuel cell-based automobiles have gained attention in the last few years due to growing public concern about urban air pollution and the consequent environmental problems. From an analysis of the power and energy requirements of a modern car, it is estimated that a base sustainable power of ca. 50 kW, supplemented with short bursts up to 80 kW, will suffice for most driving requirements. The energy demand depends greatly on driving characteristics but under normal usage is expected to be 200 Wh/km. The advantages and disadvantages of candidate fuel-cell systems and various fuels are considered, together with the question of whether the fuel should be converted directly in the fuel cell or reformed to hydrogen onboard the vehicle. For fuel cell vehicles to compete successfully with conventional internal-combustion engine vehicles, it appears that direct-conversion fuel cells, using probably hydrogen but possibly methanol, are the only realistic contenders for road transportation applications. Among the available fuel cell technologies, polymer-electrolyte fuel cells directly fueled with hydrogen appear to be the best option for powering fuel cell vehicles, as there is every prospect that they will exceed the performance of internal-combustion engine vehicles except for their first cost. A target cost of $50/kW would be mandatory to make polymer-electrolyte fuel cells competitive with internal combustion engines, and can only be achieved with design changes that substantially reduce the quantity of materials used. At present, prominent car manufacturers are devoting substantial research and development efforts to fuel cell vehicles and are projecting to start production by 2005.
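The quoted figures are easy to sanity-check. At the stated 200 Wh/km, steady cruising at an assumed 100 km/h draws only 20 kW, well inside the ca. 50 kW base power, and the $50/kW target implies a stack cost of a few thousand dollars:

```python
# Back-of-envelope check of the figures quoted above.
energy_per_km_wh = 200      # Wh/km under normal usage (from the abstract)
speed_km_h = 100            # assumed steady cruising speed, for illustration
base_power_kw = 50          # sustained stack power (from the abstract)
target_cost_per_kw = 50     # $/kW target cost (from the abstract)

cruise_power_kw = energy_per_km_wh * speed_km_h / 1000
print(f"Average power at {speed_km_h} km/h: {cruise_power_kw:.0f} kW "
      f"({cruise_power_kw / base_power_kw:.0%} of the 50 kW base)")
print(f"Implied stack cost at 80 kW peak: ${target_cost_per_kw * 80:,}")
```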
Abstract:
Precision, sophistication, and economic factors in many areas of scientific research demand compute power of very high magnitude, making advanced research in high-performance computing inevitable. The basic principle of sharing and collaborative work by geographically separated computers has been known by several names, such as metacomputing, scalable computing, cluster computing, and internet computing, and has today metamorphosed into the term grid computing. This paper gives an overview of grid computing and compares various grid architectures. We show the role that patterns can play in architecting complex systems and provide a pragmatic reference to a set of well-engineered patterns that the practicing developer can apply to crafting his or her own applications. We are not aware of a pattern-oriented approach having been applied to develop and deploy a grid. Many grid frameworks have been built or are in the process of becoming functional. These grids differ in some functionality or other, though the basic principle on which they are built is the same; despite this, there are no standard requirements listed for building a grid. The grid being a very complex system, a standard Software Architecture Specification (SAS) is mandatory, and we attempt to develop one for use by any grid user or developer. Specifically, we analyze the grid using an object-oriented approach and present the architecture using UML. The paper proposes the usage of patterns at all levels (analysis, design, and architecture) of grid development.
Abstract:
A novel approach is introduced that can more effectively use the structural information provided by traditional imaging modalities in multimodal diffuse optical tomographic imaging. The approach is based on a prior-image-constrained l1 minimization scheme and is motivated by recent progress in sparse image reconstruction techniques. The proposed framework is shown to be more effective at localizing the tumor region and recovering the optical property values, in both numerical and gelatin phantom cases, than traditional methods that use structural information. (C) 2012 Optical Society of America
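A minimal sketch of one plausible reading of such a prior-image-constrained l1 scheme: penalize the l1 norm of the deviation from a structural prior and solve by iterative soft-thresholding (ISTA). The forward matrix, phantom, and prior below are synthetic toys; the paper's exact formulation, forward model, and solver may differ.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prior_constrained_ista(A, y, x_prior, lam=0.1, n_iter=200):
    """Minimise 0.5*||A x - y||^2 + lam*||x - x_prior||_1 by ISTA.
    Promoting sparsity of the deviation from an anatomical prior is one
    simple reading of a 'prior-image-constrained' l1 scheme."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant of the gradient
    x = x_prior.copy()
    for _ in range(n_iter):
        z = x - step * (A.T @ (A @ x - y))          # gradient step on the data term
        x = x_prior + soft_threshold(z - x_prior, step * lam)  # prox keeps x near prior
    return x

# Tiny synthetic demo: 40 measurements of a 100-pixel "image".
rng = np.random.default_rng(1)
A = rng.normal(size=(40, 100))
x_true = np.zeros(100); x_true[10:15] = 1.0    # small "tumor" region
x_prior = np.zeros(100); x_prior[9:16] = 0.8   # structural prior, roughly right
y = A @ x_true + 0.01 * rng.normal(size=40)
x_hat = prior_constrained_ista(A, y, x_prior)
print(f"Reconstruction error: {np.linalg.norm(x_hat - x_true):.3f}")
```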
Abstract:
Monitoring of infrastructural resources in clouds plays a crucial role in providing application guarantees like performance, availability, and security. Monitoring is crucial from two perspectives: that of the cloud user and that of the service provider. The cloud user's interest lies in analyzing resource usage to arrive at appropriate Service-Level Agreement (SLA) demands, while the cloud provider's interest is in assessing whether those demands can be met. To support this, a monitoring framework is necessary, particularly since cloud hosts are subject to varying load conditions. To illustrate the importance of such a framework, we take performance as the Quality of Service (QoS) requirement and show how inappropriate provisioning of resources may lead to unexpected performance bottlenecks. We evaluate existing monitoring frameworks to bring out the motivation for building much more powerful ones. We then propose a distributed monitoring framework that enables fine-grained monitoring for applications, and demonstrate it with a prototype system implementation for typical use cases.
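To make the idea of a per-host monitoring agent concrete, here is a minimal sampling loop using the psutil library. The metric set, the SLA threshold, and the omitted push-to-collector step are all hypothetical illustrations, not the paper's actual framework.

```python
import time
import psutil  # assumed available; any host-metrics library would do

def sample_host(interval=1.0):
    """One fine-grained sample of the kind a per-host monitoring agent
    might push to a central collector."""
    return {
        "ts": time.time(),
        "cpu_percent": psutil.cpu_percent(interval=interval),
        "mem_percent": psutil.virtual_memory().percent,
        "load_1m": psutil.getloadavg()[0],
    }

# A naive SLA check: flag the host when sustained CPU load threatens a
# (hypothetical) response-time guarantee.
CPU_SLA_THRESHOLD = 80.0
for _ in range(5):
    s = sample_host()
    flag = " <-- potential SLA risk" if s["cpu_percent"] > CPU_SLA_THRESHOLD else ""
    print(f'{s["ts"]:.0f}: cpu={s["cpu_percent"]:.1f}% mem={s["mem_percent"]:.1f}%{flag}')
```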
Abstract:
The advent and evolution of geohazard warning systems make for an interesting study. Two broad fields are immediately visible: geohazard evaluation and the subsequent dissemination of warnings. Evidently, the latter field lacks any systematic study or standards; arbitrarily organized and vague data and information on warning techniques create confusion and indecision. The purpose of this review is to systematize the available bulk of information on warning systems so that meaningful insights can be derived through decidable flowcharts and a developmental process can be undertaken. To this end, the methods and technologies of numerous geohazard warning systems have been assessed and placed into suitable categories for a better understanding of how to analyze their efficacy as well as their shortcomings. By establishing a classification scheme based on extent, control, time period, and advancements in technology, the geohazard warning systems reported in the literature could be comprehensively analyzed and evaluated. Although major advancements have taken place in geohazard warning systems in recent times, they have lacked a complete purpose: some systems only assess the hazard and wait for other means to communicate it, while others are designed only for communication and wait for hazard information to be provided, which usually arrives after the mishap. Systems are largely left at the mercy of administrators and service providers and do not operate in real time; an integrated hazard evaluation and warning dissemination system could solve this problem. Warning systems have also suffered from complexity, the requirement of expert-level monitoring, extensive and dedicated infrastructural setups, and so on. The user community, which would greatly appreciate a convenient, fast, and generalized warning methodology, is surveyed in this review. The review concludes with the future scope of research in the field of hazard warning systems and some suggestions for developing an efficient mechanism toward an automated, integrated geohazard warning system. DOI: 10.1061/(ASCE)NH.1527-6996.0000078. (C) 2012 American Society of Civil Engineers.
Abstract:
Realization of cloud computing has been possible due to the availability of virtualization technologies on commodity platforms. Measuring resource usage on virtualized servers is difficult because the performance counters used for resource accounting are not virtualized. Hence, many of the prevalent virtualization technologies like Xen, VMware, and KVM use host-specific CPU usage monitoring, which is coarse-grained. In this paper, we present a performance monitoring tool for KVM-based virtual machines, which measures the CPU overhead incurred by the hypervisor on behalf of the virtual machine along with the CPU usage of the virtual machine itself. This fine-grained resource usage information can be used in diverse situations such as resource provisioning to support performance-related QoS requirements, identification of bottlenecks during VM placement, and resource profiling of applications in cloud environments. We demonstrate a use case of the tool by measuring the performance of web servers hosted on a KVM-based virtualized server.
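The kernel already exposes the raw numbers such a tool needs: on Linux, a QEMU/KVM process's /proc/&lt;pid&gt;/stat reports guest_time separately, and guest time is accounted inside utime. The sketch below (with a hypothetical PID) splits the process's CPU time into guest execution and hypervisor overhead along those lines; the paper's tool is presumably finer-grained than this.

```python
import os
import time

CLK_TCK = os.sysconf("SC_CLK_TCK")  # kernel clock ticks per second, typically 100

def qemu_cpu_split(pid):
    """Split a QEMU/KVM process's CPU time into guest execution and
    hypervisor overhead using /proc/<pid>/stat. guest_time (field 43)
    is accounted inside utime (field 14), so the overhead incurred on
    the VM's behalf is roughly utime - guest_time + stime."""
    with open(f"/proc/{pid}/stat") as f:
        fields = f.read().rsplit(")", 1)[1].split()  # skip pid and (comm)
    utime, stime = int(fields[11]), int(fields[12])  # fields 14 and 15 overall
    guest = int(fields[40])                          # field 43 overall
    return guest / CLK_TCK, (utime - guest + stime) / CLK_TCK

pid = 12345  # hypothetical PID of a qemu-kvm process
g0, h0 = qemu_cpu_split(pid)
time.sleep(5)
g1, h1 = qemu_cpu_split(pid)
print(f"guest CPU: {(g1 - g0) / 5:.1%}, hypervisor overhead: {(h1 - h0) / 5:.1%}")
```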
Abstract:
Primates exhibit laterality in hand usage in terms of either (a) the hand with which an individual solves a task or, in a task that requires both hands, executes the most complex action (hand preference), or (b) the hand with which an individual executes actions most efficiently (hand performance). Observations from previous studies indicate that laterality in hand usage might reflect specialization of the two hands for tasks requiring maneuvering dexterity or physical strength; however, no existing study has investigated handedness with regard to this possibility. In this study, we examined laterality in hand usage in urban free-ranging bonnet macaques, Macaca radiata, with regard to the above possibility. While solving four distinct food-extraction tasks, which varied in the number of steps involved in the extraction process and the dexterity required to execute the individual steps, the macaques consistently used one hand, the maneuvering hand, for extracting food (the task requiring maneuvering dexterity) and the other, the supporting hand, for supporting the body (the task requiring physical strength). Analogously, the macaques used the maneuvering hand for spontaneous routine activities that involved maneuvering in three-dimensional space, such as grooming and hitting an opponent during an agonistic interaction, and the supporting hand for those that required physical strength, such as pulling the body up while climbing. Moreover, when solving a task that ergonomically forced the usage of a particular hand, the macaques extracted food faster with the maneuvering hand than with the supporting hand, demonstrating its higher maneuvering dexterity. As opposed to conventional ideas of handedness in non-human primates, these observations demonstrate a division of labor between the two hands, marked by their consistent usage across spontaneous and experimental tasks requiring either maneuvering in three-dimensional space or physical strength. Am. J. Primatol. 76:576-585, 2014. (c) 2013 Wiley Periodicals, Inc.
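Consistency of this kind is typically quantified per individual with a handedness index and a binomial test against chance; the abstract does not describe the study's statistics, so the sketch below uses a standard laterality measure and entirely hypothetical counts.

```python
from scipy.stats import binomtest

# Hypothetical counts of maneuvering-hand use for one macaque across trials.
right_hand_uses, total_trials = 46, 50

# Handedness index, a standard laterality measure: HI = (R - L) / (R + L),
# ranging from -1 (fully left-handed) to +1 (fully right-handed).
hi = (right_hand_uses - (total_trials - right_hand_uses)) / total_trials
p = binomtest(right_hand_uses, total_trials, 0.5).pvalue
print(f"HI = {hi:.2f}, binomial test vs chance: p = {p:.2g}")
```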