943 results for Weighted average power tests


Relevance: 100.00%

Abstract:

Queueing is a fact of life that we witness daily. We have all had the experience of waiting in line for some reason, and we also know that it is an annoying situation. As the adage says, "time is money"; this is perhaps the best way of stating what queueing problems mean for customers. Human beings are not very tolerant, and they are even less so when having to wait in line for service. Banks, roads, post offices and restaurants are just some examples where people must wait for service. Studies of queueing phenomena have typically addressed the optimisation of performance measures (e.g. average waiting time, queue length and server utilisation rates) and the analysis of equilibrium solutions. Although this work has been useful for improving the efficiency of many queueing systems and for designing new processes in social and physical systems, it has only provided a limited ability to explain the behaviour observed in many real queues. The individual behaviour of the agents involved in queueing systems and their decision-making processes have received little attention. In this dissertation we differ from this traditional research by analysing how the agents involved in the system make decisions, instead of focusing on optimising performance measures or analysing an equilibrium solution. This dissertation builds on and extends the framework proposed by van Ackere and Larsen (2004) and van Ackere et al. (2010).
We focus on studying behavioural aspects of queueing systems and incorporate this still underdeveloped framework into the operations management field. In the first chapter of this thesis we provide a general introduction to the area, as well as an overview of the results.

In Chapters 2 and 3, we use Cellular Automata (CA) to model service systems where captive interacting customers must decide each period which facility to join for service. They base this decision on their expectations of sojourn times. Each period, customers use new information (their most recent experience and that of their best-performing neighbour) to form expectations of the sojourn time at the different facilities. Customers update their expectations using an adaptive expectations process to combine their memory and their new information. We label "conservative" those customers who give more weight to their memory than to the new information. In contrast, when they give more weight to new information, we call them "reactive".

In Chapter 2, we consider customers with different degrees of risk-aversion who take uncertainty into account. They choose which facility to join based on an estimated upper bound of the sojourn time, which they compute using their perceptions of the average sojourn time and of the level of uncertainty. We assume the same exogenous service capacity for all facilities, which remains constant throughout. We first analyse the collective behaviour generated by the customers' decisions. We show that the system achieves low weighted average sojourn times when the collective behaviour results in neighbourhoods of customers loyal to a facility and the customers are approximately equally split among all facilities. The lowest weighted average sojourn time is achieved when exactly the same number of customers patronises each facility, implying that no customer wishes to switch facility; in this case, the system has achieved the Nash equilibrium. We show that there is a non-monotonic relationship between the degree of risk-aversion and system performance. Customers with an intermediate degree of risk-aversion typically incur higher sojourn times; in particular, they rarely achieve the Nash equilibrium. Risk-neutral customers have the highest probability of achieving the Nash equilibrium.

Chapter 3 considers a service system similar to the previous one, but with risk-neutral customers, and relaxes the assumption of exogenous service rates. That is, we model a queueing system with endogenous service rates by enabling managers to adjust the service capacity of the facilities. We assume that managers do so based on their perceptions of the arrival rates, and we use the same principle of adaptive expectations to model these perceptions. We consider service systems in which the managers' decisions take time to be implemented. Managers are characterised by a profile determined by the speed at which they update their perceptions, the speed at which they take decisions, and how coherent they are in accounting for previous decisions still to be implemented when taking their next decision. We find that the managers' decisions exhibit a strong path-dependence: owing to the initial conditions of the model, the facilities of managers with identical profiles can evolve completely differently. In some cases the system becomes "locked in" to a monopoly or duopoly situation.
The competition between managers causes the weighted average sojourn time of the system to converge to the exogenous benchmark value which they use to estimate their desired capacity. Concerning the managers' profile, we find that the more conservative a manager is regarding new information, the larger the market share his facility achieves. Additionally, the faster he takes decisions, the higher the probability that he achieves a monopoly position.

In Chapter 4 we consider a one-server queueing system with non-captive customers. We carry out an experiment aimed at analysing the way human subjects, taking on the role of the manager, make capacity decisions for a service facility in a laboratory setting. We adapt the model proposed by van Ackere et al. (2010). This model relaxes the assumption of a captive market and allows current customers to decide whether or not to use the facility. Additionally, the facility also has potential customers who do not currently patronise it, but might consider doing so in the future. We identify three groups of subjects whose decisions cause similar behavioural patterns. These groups are labelled gradual investors, lumpy investors, and random investors. Using an autocorrelation analysis of the subjects' decisions, we show that these decisions are positively correlated with the decisions taken one period earlier. Subsequently, we formulate a heuristic to model the decision rule used by the subjects in the laboratory. We find that this decision rule fits the subjects who gradually adjust capacity very well, but it does not capture the behaviour of the subjects in the other two groups.

In Chapter 5 we summarise the results and provide suggestions for further work. Our main contribution is the use of simulation and experimental methodologies to explain the collective behaviour generated by customers' and managers' decisions in queueing systems, as well as the analysis of the individual behaviour of these agents. In this way, we differ from the typical queueing literature, which focuses on optimising performance measures and analysing equilibrium solutions. Our work can be seen as a first step towards understanding the interaction between customer behaviour and the capacity adjustment process in queueing systems. This framework is still in its early stages, and accordingly there is large potential for further work spanning several research topics. Interesting extensions include incorporating other characteristics of queueing systems which affect the customers' experience (e.g. balking, reneging and jockeying); providing customers and managers with additional information for their decisions (e.g. service price, quality, customers' profile); analysing different decision rules; and studying other characteristics which determine the profile of customers and managers.
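The adaptive expectations rule and the risk-adjusted facility choice described above can be stated compactly. A minimal sketch, assuming a single smoothing parameter lam and an upper bound of the form mean + k · spread; the dissertation's exact parameterisation is not given in this abstract, so the names and the functional form are illustrative:

```python
def update_expectation(memory: float, new_info: float, lam: float) -> float:
    """Adaptive expectations: blend remembered expectation with new information.
    lam near 0 -> "conservative" (memory dominates); lam near 1 -> "reactive"."""
    return (1 - lam) * memory + lam * new_info

def upper_bound(avg: float, spread: float, risk_aversion: float) -> float:
    """Estimated upper bound on sojourn time; a more risk-averse customer
    (larger risk_aversion) inflates the bound more.  Illustrative form."""
    return avg + risk_aversion * spread

# Each period, a customer updates expectations per facility and joins the
# facility with the lowest estimated upper bound on the sojourn time.
perceived = {"A": (12.0, 3.0), "B": (10.0, 6.0)}   # (avg, spread) per facility
k = 1.5                                            # degree of risk-aversion
choice = min(perceived, key=lambda f: upper_bound(*perceived[f], k))
print(choice)                                      # "A": 16.5 < 19.0
```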

Relevance: 100.00%

Abstract:

We analyse the use of the ordered weighted average (OWA) in decision-making, giving special attention to business and economic decision-making problems. We present several aggregation techniques that are very useful for decision-making, such as the Hamming distance, the adequacy coefficient and the index of maximum and minimum level. We suggest a new approach using immediate weights, that is, using the weighted average and the OWA operator in the same formulation. We further generalize them using generalized and quasi-arithmetic means. We also analyse the applicability of the OWA operator in business and economics and see that it can be used instead of the weighted average. We end the paper with an application to a business multi-person decision-making problem regarding production management.
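As a concrete illustration of the difference between the weighted average and the OWA operator, and of how both can enter a single formulation, here is a minimal sketch. The immediate-weights step follows a common product-and-renormalise construction; the paper's exact definition may differ, so treat the details as assumptions:

```python
import numpy as np

def owa(values, w):
    """Ordered weighted average: weights attach to ranked positions,
    not to particular sources."""
    b = np.sort(np.asarray(values, float))[::-1]    # descending reorder
    return float(w @ b)

def immediate_weights(values, p, w):
    """Weighted average and OWA in one formulation: combine the source
    weights p (WA) with the position weights w (OWA) after reordering,
    then renormalise."""
    order = np.argsort(values)[::-1]
    b = np.asarray(values, float)[order]
    v = np.asarray(w) * np.asarray(p)[order]
    v = v / v.sum()
    return float(v @ b)

vals = [70.0, 40.0, 90.0]             # e.g. expected payoffs of three options
p = np.array([0.5, 0.3, 0.2])         # importance of each source (WA weights)
w = np.array([0.2, 0.3, 0.5])         # rank weights (a pessimistic OWA)
print(np.dot(p, vals), owa(vals, w), immediate_weights(vals, p, w))
```

With a pessimistic w, the OWA pulls the aggregate toward the worst outcomes, while the weighted average only reflects source importance; the immediate-weights result blends both attitudes.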

Relevance: 100.00%

Abstract:

Professional cleaning is a basic service occupation with a wide variety of tasks carried out in all kinds of sectors and workplaces by a large workforce. One important risk for cleaning workers is exposure to the chemical substances present in cleaning products. Monoethanolamine was found to be present in many cleaning products such as general-purpose cleaners, bathroom cleaners, floor cleaners and kitchen cleaners. Monoethanolamine can injure the skin, and exposure to it has been associated with asthma even at low air concentrations. It is a strong irritant and is known to be involved in sensitizing mechanisms. It is very likely that the use of cleaning products containing monoethanolamine gives rise to respiratory and dermal exposures, so both exposure routes need further investigation. The determination of monoethanolamine has traditionally been difficult, and the available analytical methods are poorly adapted to occupational exposure assessments. For monoethanolamine air concentrations, a sampling and analytical method was already available and could be used. However, a method to analyse samples from skin exposure assessments and from skin permeation experiments was missing. One main objective of this master's thesis was therefore to identify a previously developed and described analytical method for measuring monoethanolamine in water solutions and to set it up in the laboratory. Monoethanolamine was analyzed after a derivatization reaction with o-phthaldialdehyde. The fluorescent monoethanolamine derivative was then separated by high-performance liquid chromatography and detected with a fluorescence detector. The method was found to be suitable for qualitative and quantitative analysis of monoethanolamine. An exposure assessment was conducted in the cleaning sector to measure respiratory and dermal exposures to monoethanolamine during floor cleaning. Stationary air samples (n = 36) were collected in 8 companies and samples for dermal exposure (n = 12) were collected in two companies. The air concentrations detected (mean = 0.18 mg/m³, standard deviation = 0.23 mg/m³, geometric mean = 0.09 mg/m³, geometric standard deviation = 3.50) were mostly below one tenth of the Swiss 8-h time-weighted average occupational exposure limit. Factors that influenced the measured monoethanolamine air concentrations were room size, the ventilation system, the concentration of monoethanolamine in the cleaning product and the amount of monoethanolamine used. Measured skin exposures ranged from 0.6 to 128.4 mg/sample. Some cleaning workers who participated in the skin exposure assessment did not use gloves and had direct contact with the solutions containing the cleaning product and monoethanolamine; indeed, during the entire sampling campaign, cleaning workers mostly did not use gloves. Cleaning workers are thus at risk of being regularly exposed to low air concentrations of monoethanolamine. This exposure may be problematic if a worker suffers from allergic reactions (e.g. asthma). In that case, substituting the cleaning product may be a good preventive measure, as several different cleaning products are available for similar cleaning tasks. There are currently no occupational exposure limits against which to compare the skin exposures that were found. To prevent skin exposures, adaptation of the cleaning techniques and the use of gloves should be considered.
Simultaneous skin and airborne exposures might accelerate adverse health effects. Overall, the risks caused by exposure to monoethanolamine are considered low to moderate when cleaning products are used correctly. Whenever possible, skin exposure should be avoided. Further research should especially consider the dermal exposure route, as very high exposures might occur through skin contact with cleaning products. Skin exposure may cause dermatitis but also sensitization. In addition, new biomedical insights are needed to better understand the risks of dermal exposure; skin permeability experiments should therefore be considered.

Relevance: 100.00%

Abstract:

There is currently a considerable diversity of quantitative measures available for summarizing the results of single-case studies. Given that the interpretation of some of them is difficult due to the lack of established benchmarks, the current paper proposes an approach for obtaining further numerical evidence on the importance of the results, complementing the substantive criteria, visual analysis, and primary summary measures. This additional evidence consists of obtaining the statistical significance of the outcome when referred to the corresponding sampling distribution. This sampling distribution is formed by the values of the outcomes (expressed as data nonoverlap, R-squared, etc.) in case the intervention is ineffective. The approach proposed here is intended to offer the probability of obtaining an outcome as extreme as the one observed when there is no treatment effect, without the need for assumptions that cannot be checked with guarantees. Following this approach, researchers would compare their outcomes to reference values rather than constructing the sampling distributions themselves. The integration of single-case studies is problematic when different metrics are used across primary studies and not all raw data are available. Via the approach for assigning p values, it is possible to combine the results of similar studies regardless of the primary effect size indicator. The alternatives for combining probabilities are discussed in the context of single-case studies, pointing out two potentially useful methods: one based on a weighted average and the other on the binomial test.
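The two combination methods mentioned at the end can be sketched briefly. This is a hedged illustration: the choice of weights and of the nominal alpha are assumptions for the example, not the paper's prescriptions:

```python
from scipy.stats import binomtest

def weighted_average_p(p_values, weights):
    """Combine p values as a weighted mean; weights might reflect series
    length or study quality (an assumption made here for illustration)."""
    return sum(p * w for p, w in zip(p_values, weights)) / sum(weights)

def binomial_combination(p_values, alpha=0.05):
    """Ask whether more studies fell below alpha than chance alone predicts."""
    k = sum(p < alpha for p in p_values)
    return binomtest(k, n=len(p_values), p=alpha, alternative="greater").pvalue

ps = [0.03, 0.20, 0.04, 0.01]          # p values from four single-case studies
print(weighted_average_p(ps, weights=[20, 12, 15, 30]))
print(binomial_combination(ps))        # 3 of 4 below 0.05 is very unlikely
```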

Relevance: 100.00%

Abstract:

The goal of this master's thesis was to build a new inventory valuation model for the target company. The old valuation model had been found inaccurate and difficult to update. The new model was intended to be more accurate and easier to update, yet still user-friendly. A second goal was to describe the production process in which costs become tied to products. The work can be divided into three phases. The theoretical material presented first is based mainly on the theory of inventory valuation and of cost accounting. The second phase describes the company's current production process and its target state. The final phase is the development of the valuation model, going through in detail the structure and logic of the new model and the results it produces. The main outcome of the work was a spreadsheet-based valuation model that takes different cost structures into account and computes the weighted average price needed for retrospective pricing. A second outcome was the set of process descriptions of production. The work also revealed needs for further development in cost accounting, which will be addressed in the future.
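The internals of the spreadsheet model are not published, but the weighted average price it computes for retrospective pricing is standard arithmetic. A minimal sketch with made-up lot data:

```python
def weighted_average_price(lots):
    """Quantity-weighted average unit price over inventory lots (qty, price)."""
    total_qty = sum(q for q, _ in lots)
    return sum(q * p for q, p in lots) / total_qty

# Three purchase lots of the same article at different unit prices
lots = [(100, 4.20), (250, 4.55), (150, 4.10)]
print(round(weighted_average_price(lots), 4))      # 4.345
```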

Relevance: 100.00%

Abstract:

The Compact LInear Collider (CLIC) collaboration studies the possibility of building a multi-TeV (3 TeV centre-of-mass), high-luminosity (10³⁴ cm⁻² s⁻¹) electron-positron collider for particle physics. The CLIC scheme is based on high-frequency (30 GHz) linear accelerators powered by a low-energy, high-intensity drive beam running parallel to the main linear accelerators (the Two-Beam Acceleration concept). One of the main challenges of this scheme is to generate the drive beam in a low-frequency accelerator and to achieve the high-frequency bunch structure needed for the final acceleration. To provide bunch frequency multiplication, the main manipulation consists in sending the beam through an isochronous combiner ring, using radio-frequency (RF) deflectors to inject and combine electron bunches. However, such a scheme had never been used before, and the first stage of the CLIC Test Facility 3 (CTF3) project aims at a low-charge demonstration of bunch frequency multiplication by RF injection into an isochronous ring. This proof-of-principle experiment, which was successfully performed at CERN in 2002 using a modified version of the LEP (Large Electron Positron) pre-injector complex, is the central subject of this report. The bunch combination experiment consists in accelerating, in a linear accelerator, five pulses in which the electron bunches are spaced by 10 cm, and combining them in an isochronous ring to obtain one pulse in which the electron bunches are spaced by 2 cm, thus achieving a bunch frequency multiplication by a factor of five and increasing the charge per pulse by the same factor. The combination is done by means of RF deflecting cavities that create a time-dependent bump inside the ring, allowing the bunches of the five pulses to be interleaved. This process imposes several beam dynamics constraints, such as isochronicity, and specific tolerances on the electron bunches, which are defined in this report. The design studies of the CTF3 Preliminary Phase are detailed, with emphasis on the novel injection process using RF deflectors. The high-power tests performed on the RF deflectors prior to their installation in the ring are also reported. The commissioning activity is presented by comparing beam measurements to model simulations and theoretical expectations. Finally, the bunch frequency multiplication experiments are described and analysed. It is shown that bunch frequency multiplication is feasible with very good efficiency after careful optimisation of the injection and RF deflector parameters. In addition to the experience acquired in operating these RF deflectors, important conclusions for future CTF3 and CLIC activities are drawn from this first demonstration of bunch frequency multiplication by RF injection into an isochronous ring.

Relevance: 100.00%

Abstract:

One aim of this study is to determine the impact of water velocity on the uptake of indicator polychlorinated biphenyls (iPCBs) by silicone rubber (SR) and low-density polyethylene (LDPE) passive samplers. A second aim is to assess the efficiency of performance reference compounds (PRCs) in correcting for the impact of water velocity. SR and LDPE samplers were spiked with 11 or 12 PRCs and exposed for 6 weeks to four different velocities (in the range of 1.6 to 37.7 cm s⁻¹) under river-like flow conditions, using a channel system supplied with river water. A relationship between velocity and uptake was found for each iPCB, which makes it possible to determine the expected changes in uptake due to velocity variations. For both samplers, velocity increases from 2 cm s⁻¹ to 10, 30 (interpolated data) and 100 cm s⁻¹ (extrapolated data) lead to increases in uptake that do not exceed factors of 2, 3 and 4.5, respectively. The results also showed that the influence of velocity decreased with increasing octanol-water partition coefficient (log Kow) of the iPCBs for SR, whereas the opposite effect was observed for LDPE. Time-weighted average (TWA) concentrations of iPCBs in water were calculated from iPCB uptake and PRC release. These calculations were performed using either a single PRC or all the PRCs. The efficiency of PRCs in correcting for the impact of velocity was assessed by comparing the TWA concentrations obtained at the four tested velocities. For SR, good agreement was found among the four TWA concentrations with both methods (average RSD < 10%). For LDPE too, PRCs offered a good correction of the impact of water velocity (average RSD of about 10 to 20%). These results contribute to the process of acceptance of passive sampling in routine regulatory monitoring programs.
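For readers unfamiliar with PRC-based correction, a sketch of the generic one-compartment passive-sampler model is given below. This is the standard textbook model with first-order PRC dissipation, not necessarily the exact calculation used in the study, and all numbers are illustrative:

```python
import math

def sampling_rate_from_prc(f_retained, K_sw, m, t):
    """In-situ sampling rate Rs (L/day) inferred from the fraction of a PRC
    retained after exposure, assuming f = exp(-Rs * t / (K_sw * m))."""
    return -math.log(f_retained) * K_sw * m / t

def twa_concentration(N, Rs, K_sw, m, t):
    """One-compartment uptake model solved for the TWA water concentration:
    N = Cw * K_sw * m * (1 - exp(-Rs * t / (K_sw * m)))."""
    return N / (K_sw * m * (1.0 - math.exp(-Rs * t / (K_sw * m))))

# Illustrative values for one PCB congener on a silicone rubber sheet
K_sw, m, t = 10 ** 5.0, 0.03, 42.0     # L/kg, kg of sampler, days exposed
Rs = sampling_rate_from_prc(f_retained=0.40, K_sw=K_sw, m=m, t=t)
print(twa_concentration(N=250.0, Rs=Rs, K_sw=K_sw, m=m, t=t))  # ng/L if N in ng
```

Because Rs rises with water velocity, a PRC that dissipates faster at high flow automatically yields a larger Rs and thus corrects the TWA concentration; this is the mechanism whose efficiency the study assesses.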

Relevance: 100.00%

Abstract:

In recent years there has been growing interest in composite indicators as an efficient tool of analysis and a method of prioritizing policies. This paper presents a composite index of intermediary determinants of child health using a multivariate statistical approach. The index shows how specific determinants of child health vary across Colombian departments (administrative subdivisions). We used data collected from the 2010 Colombian Demographic and Health Survey (DHS) for 32 departments and the capital city, Bogotá. Adapting the conceptual framework of the Commission on Social Determinants of Health (CSDH), five dimensions related to child health are represented in the index: material circumstances, behavioural factors, psychosocial factors, biological factors and the health system. In order to generate the variable weights, and taking into account the discrete nature of the data, principal component analysis (PCA) using polychoric correlations was employed in constructing the index. Five principal components were selected, and the index was estimated as a weighted average of the retained components. A hierarchical cluster analysis was also carried out. The results show that the biggest differences in intermediary determinants of child health are associated with health care before and during delivery.
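A compact sketch of this construction, assuming the polychoric correlation matrix has already been estimated (that step typically needs a dedicated routine, e.g. R's polycor); the component selection and the variance-proportional weights follow the description above:

```python
import numpy as np

def composite_index(scores_std, corr, n_components=5):
    """PCA on a (polychoric) correlation matrix via eigendecomposition,
    then a weighted average of the retained component scores, with weights
    proportional to the variance each component explains."""
    eigval, eigvec = np.linalg.eigh(corr)            # ascending eigenvalues
    top = np.argsort(eigval)[::-1][:n_components]    # retain top components
    lam, V = eigval[top], eigvec[:, top]
    comp_scores = scores_std @ V                     # departments x components
    weights = lam / lam.sum()                        # explained-variance weights
    return comp_scores @ weights                     # one index value per row

# Illustrative stand-in data: 33 departments x 12 standardised indicators
rng = np.random.default_rng(0)
X = rng.standard_normal((33, 12))
R = np.corrcoef(X, rowvar=False)                     # placeholder for polychoric R
print(composite_index(X, R)[:5])
```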

Relevance: 100.00%

Abstract:

This paper presents a composite index of early childhood health using a multivariate statistical approach. The index shows how child health varies across Colombian departments (administrative subdivisions). In recent years there has been growing interest in composite indicators as an efficient analysis tool and a way of prioritizing policies. These indicators not only enable multi-dimensional phenomena to be simplified, but also make it easier to measure, visualize, monitor and compare a country's performance on particular issues. We used data collected from the Colombian Demographic and Health Survey (DHS) for 32 departments and the capital city, Bogotá, in 2005 and 2010. The variables included in the index provide a measure of three dimensions related to child health: health status, health determinants and the health system. In order to generate the variable weights and take into account the discrete nature of the data, we employed principal component analysis (PCA) using polychoric correlations. Five principal components were selected, and the index was estimated as a weighted average of the retained components. A hierarchical cluster analysis was also carried out. We observed that the departments ranking in the lowest positions are located on the Colombian periphery; they are departments with low per capita incomes and critical social indicators. The results suggest that regional disparities in child health may be associated with differences in parental characteristics, household conditions and levels of economic development, which makes clear the importance of context in the study of child health in Colombia.

Relevance: 100.00%

Abstract:

Background: Epidemiological evidence of the effects of long-term exposure to air pollution on the chronic processes of atherogenesis is limited.

Objective: We investigated the association of long-term exposure to traffic-related air pollution with subclinical atherosclerosis, measured by carotid intima media thickness (IMT) and ankle–brachial index (ABI).

Methods: We performed a cross-sectional analysis using data collected during the re-examination (2007–2010) of 2,780 participants in the REGICOR (Registre Gironí del Cor: the Gerona Heart Register) study, a population-based prospective cohort in Girona, Spain. Long-term exposure across residences was calculated as the last 10 years' time-weighted average of residential nitrogen dioxide (NO2) estimates (based on a local-scale land-use regression model), traffic intensity in the nearest street, and traffic intensity in a 100 m buffer. Associations with IMT and ABI were estimated using linear regression and multinomial logistic regression, respectively, controlling for sex, age, smoking status, education, marital status, and several other potential confounders or intermediates.

Results: Exposure contrasts between the 5th and 95th percentiles for NO2 (25 μg/m³), traffic intensity in the nearest street (15,000 vehicles/day), and traffic load within 100 m (7,200,000 vehicle-m/day) were associated with differences in IMT of 0.56% (95% CI: −1.5, 2.6%), 2.32% (95% CI: 0.48, 4.17%), and 1.91% (95% CI: −0.24, 4.06%), respectively. Exposures were positively associated with an ABI of > 1.3, but not with an ABI of < 0.9. Stronger associations were observed among those with a high level of education and in men ≥ 60 years of age.

Conclusions: Long-term traffic-related exposures were associated with subclinical markers of atherosclerosis. Prospective studies are needed to confirm the associations and to further examine differences among population subgroups.

Key words: ankle–brachial index, average daily traffic, cardiovascular disease, exposure assessment, exposure to tailpipe emissions, intima media thickness, land use regression model, Mediterranean diet, nitrogen dioxide
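The residence-history exposure metric is a plain time-weighted average. A minimal sketch; the REGICOR weighting details are not spelled out in this abstract, so the structure below is an assumption:

```python
def residential_twa(history):
    """Time-weighted average exposure over a residence history, where each
    entry is (years_at_address, modelled_annual_mean_no2)."""
    total_years = sum(y for y, _ in history)
    return sum(y * c for y, c in history) / total_years

# Illustrative 10-year history: (years lived, LUR-modelled NO2 in ug/m3)
history = [(6.0, 32.0), (3.0, 24.5), (1.0, 41.0)]
print(residential_twa(history))                      # 30.65 ug/m3
```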

Relevance: 100.00%

Abstract:

The present study builds on a previous proposal for assigning probabilities to the outcomes computed with different primary indicators in single-case studies. These probabilities are obtained by comparing the outcome to previously tabulated reference values and reflect the likelihood of the results when there is no intervention effect. The current study explores how well different metrics are translated into p values in the context of simulated data. Furthermore, two published multiple-baseline data sets are used to illustrate how well the probabilities reflect the intervention effectiveness as assessed by the original authors. Finally, the importance of which primary indicator is used in each data set to be integrated is explored; two ways of combining probabilities are used: a weighted average and a binomial test. The results indicate that the translation into p values works well for the two nonoverlap procedures, with the results for the regression-based procedure diverging due to some undesirable features of its performance. These p values, both taken individually and combined, were well aligned with the effectiveness for the real-life data. The results suggest that assigning probabilities can be useful for translating the primary measure into a common metric, using these probabilities as additional evidence of the importance of behavioral change, complementing visual analysis and professionals' judgments.

Relevance: 100.00%

Abstract:

Chironomidae spatial distribution was investigated at 63 near-pristine sites in 22 catchments along the Iberian Mediterranean coast. We used partial redundancy analysis to study Chironomidae community responses to a number of environmental factors acting at several spatial scales. The percentage of variation explained by local factors (23.3%) was higher than that explained by geographical (8.5%) or regional factors (8%). Catchment area, longitude, pH, the percentage of siliceous rocks in the catchment, and altitude were the best predictors of Chironomidae assemblages. We used k-means cluster analysis to classify sites into three major groups based on Chironomidae assemblages. These groups were explained mainly by longitudinal zonation and geographical position, and were defined as 1) siliceous headwater streams, 2) mid-altitude streams with small catchment areas, and 3) medium-sized calcareous streams. Distinct species assemblages with associated indicator taxa were established for each stream category using IndVal analysis. Species responses to the previously identified key environmental variables were determined, and optima and tolerances were established by weighted average regression. Distinct ecological requirements were observed among genera and among species of the same genus. Some genera were restricted to headwater systems (e.g., Diamesa), whereas others (e.g., Eukiefferiella) had wider ecological preferences but distinct distributions among congenerics. In the present period of climate change, species optima and tolerances might be a useful tool for predicting the responses of different species to changes in significant environmental variables, such as temperature and hydrology.
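The weighted-average estimators behind the reported optima and tolerances are the conventional ones from weighted averaging regression; a minimal sketch with invented counts:

```python
import numpy as np

def wa_optimum_tolerance(abundance, gradient):
    """Weighted-average optimum u and tolerance t of a taxon along an
    environmental gradient:  u = sum(y*x)/sum(y),
    t = sqrt(sum(y*(x-u)^2)/sum(y))."""
    y = np.asarray(abundance, float)
    x = np.asarray(gradient, float)
    u = (y * x).sum() / y.sum()
    t = np.sqrt((y * (x - u) ** 2).sum() / y.sum())
    return u, t

# Illustrative: counts of one chironomid taxon along an altitude gradient (m)
counts = [0, 2, 14, 30, 22, 5, 1]
altitude = [50, 200, 450, 700, 950, 1200, 1500]
print(wa_optimum_tolerance(counts, altitude))        # optimum near mid-altitude
```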

Relevance: 100.00%

Abstract:

The use of information technology is important for the growth of a microenterprise. Using action research, this study sought the best software to support customer relationship management for KaunisSinä, a sole proprietorship engaged in direct sales of cosmetics, according to the goals and constraints set by its part-time entrepreneur. To select the software, acquisition methods were examined for commercial off-the-shelf software, open source software and custom-built software. Criteria for comparing and selecting the software were derived from a survey of the entrepreneur's working practices. The comparison used the weighted average method. Open source customer relationship management software with suitable features is available on the market. The choice is a compromise between the functionality and features offered by the software and the working practices the company has developed; the company must therefore partly adapt its practices to the software.
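The weighted average method used for the comparison is a standard scoring matrix. A minimal sketch with hypothetical criteria and ratings (the thesis's actual criteria and weights are not reproduced here):

```python
def weighted_scores(weights, ratings):
    """Weighted-average score per candidate; ratings maps each candidate to
    per-criterion scores aligned with weights."""
    total = sum(weights)
    return {name: sum(w * s for w, s in zip(weights, scores)) / total
            for name, scores in ratings.items()}

# Hypothetical criteria: fit to workflow, cost, ease of updating, usability
weights = [4, 2, 3, 3]
ratings = {"CRM A (open source)": [5, 4, 4, 4],
           "CRM B (commercial)":  [4, 2, 4, 4],
           "CRM C (custom)":      [5, 1, 2, 3]}
scores = weighted_scores(weights, ratings)
print(max(scores.items(), key=lambda kv: kv[1]))   # highest weighted score wins
```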

Relevance: 100.00%

Abstract:

In public-sector real estate production, public-private partnerships are increasingly used, with a private company taking responsibility for the design, construction and maintenance of a property for a long period. The goal of this work is to evaluate the implementation alternatives for real estate projects carried out under the life-cycle model, and to map the risks a life-cycle project involves at the planning stage. The study is qualitative, supported where necessary by calculations using quantitative data. The research material consists of academic publications and textbooks on the subject; valuable expertise and material were also obtained through YIT, the commissioner of the work. Competitive dialogue is recommended as the procurement procedure for life-cycle projects. It enables a dialogue between the client and the service providers during the procurement procedure, allowing the parties to choose the most suitable implementation model together. Projects can be financed by the client or with private financing. From a financing perspective, the client's own financing is the cheapest option, and it does not reduce the benefits of the life-cycle model compared with traditional contracting. Based on the calculation, the difference between the weighted average cost of capital (WACC) of client financing and of private financing is 2.8%. Setting up a project company makes financing even more expensive, because a rate corresponding to the required return on the project company's equity is added to the costs carried on the balance sheet. For the service provider, the essential risks are delays in the construction phase and alteration works caused by shortcomings in the usability and suitability of the property. Tying the service fee to the cost-of-living index, which poorly reflects the real cost structure of maintenance, will reduce the profitability of the service provider's operations over a long contract period. In 2005–2011, the real estate maintenance cost index rose 2.3% faster per year than the cost-of-living index.
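The WACC comparison rests on the standard formula. A minimal sketch with illustrative figures, not the thesis's data:

```python
def wacc(equity, debt, cost_equity, cost_debt, tax_rate=0.0):
    """Weighted average cost of capital:
    WACC = E/(E+D)*Re + D/(E+D)*Rd*(1 - Tc)."""
    total = equity + debt
    return (equity / total) * cost_equity \
         + (debt / total) * cost_debt * (1 - tax_rate)

# Client financing: essentially all low-rate public borrowing (invented rates)
client = wacc(equity=0.0, debt=100.0, cost_equity=0.0, cost_debt=0.030)
# Private financing: project company with an equity tranche requiring a
# higher return, plus costlier debt (invented rates)
private = wacc(equity=20.0, debt=80.0, cost_equity=0.10, cost_debt=0.048)
print(round(client * 100, 1), round(private * 100, 1))   # 3.0 vs 5.8
```

The gap between the two rates is the kind of difference (2.8% in the thesis's calculation) that the financing comparison reports.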

Relevance: 100.00%

Abstract:

The use of a no-tillage system combined with crop-livestock integration is an alternative management practice that promotes the accumulation of dry matter in the soil, which is essential for making the system sustainable and profitable. The aim of this study was to evaluate the operational performance of a tractor-planter set on maize straw intercropped with Urochloa, under different seeding modes. The soybean crop was seeded over the intercropping of two forage species (Urochloa brizantha and Urochloa ruziziensis) in five cropping systems: MBL (maize with Urochloa in the maize seeding row, mixed with base fertilizer and deposited at 0.10 m), MBE (maize with Urochloa seeded between rows on the same day as the maize), MBC (Urochloa between maize rows, seeded with the covering fertilizer at the V4 stage), MBLA (maize with Urochloa by broadcast seeding at the V4 stage) and MS (single maize: control). The following variables were evaluated: dry mass of maize straw, dry mass of forage and total dry mass of straw; and, as operational parameters, seeding speed, wheel slippage, traction force and average power at the drawbar. The results showed that the amount of straw produced by intercropping maize with Urochloa interferes with the operational performance of the tractor-planter during soybean seeding: areas with larger amounts of straw entail greater energy demand as well as higher wheel slippage.
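The drawbar quantities reported above are related by a one-line formula: average drawbar power is traction force times forward speed. A minimal sketch with invented values:

```python
def drawbar_power_kw(force_kn, speed_km_h):
    """Average drawbar power: P (kW) = F (kN) * v (m/s)."""
    return force_kn * (speed_km_h / 3.6)

# Illustrative soybean-seeding pass over heavy Urochloa straw
print(round(drawbar_power_kw(force_kn=12.0, speed_km_h=5.5), 1))  # 18.3 kW
```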