Abstract:
The disproportionate contribution of street gang members to crime rates is, without doubt, a robust empirical finding. Numerous studies have concluded that street gang membership is a risk factor over and above association with delinquent peers, itself among the best predictors of delinquency along with criminal history and antisocial personality traits. Yet the specific contribution of street gang membership to the explanation of delinquency remains largely unknown. Among the variables most often cited to explain it is the concept of adherence to gang culture, which, however, has never been specifically operationalized. The aim of this thesis is to study the specific contribution of an offender's adherence to street gang culture to the explanation of delinquency. More precisely, its objectives are to define street gang culture, to operationalize adherence to street gang culture, to examine the reliability of the measure of adherence to gang culture, and to study its relationship with the nature, variety and frequency of the delinquent behaviour of offenders under the responsibility of Quebec's youth centres and correctional services. Three scientific articles, supplemented by a regular chapter, serve to demonstrate the thesis. The first article presents the steps taken to develop the first measure of adherence to gang culture (Mesure de l'adhésion à la culture de gang, MACg). Specifically, it presents the literature review that led to a first definition of gang culture and to the operationalization of the concept. It also reports on the validation of the relevance of its content and on preliminary data showing the very good internal consistency of the MACg.
This first study is followed, in a regular chapter, by the results of an examination of the scoring of the main indicators of gang culture. This step is a necessary complement to the examination of the face validity of the MACg. The results reveal very satisfactory levels of agreement among the observations of various professionals from Quebec's youth centres and correctional services, who were asked to score the gang culture indicators on the basis of two fictional case histories, one of a juvenile offender and one of an adult. The second article then presents the results of a first examination of the reliability of the MACg using the Rasch model from Item Response Theory. The results support the unidimensionality of the MACg and its ability to distinguish groups of items and of persons along a continuum of severity of adherence to gang culture. However, differential item functioning and poor fit are observed for some items, as is the inadequacy of the Likert-type item response structure chosen when the MACg was developed. A revised version of the instrument is therefore proposed. Finally, the third and last article presents the results of an examination of the relationship between delinquency and an offender's adherence to gang culture as measured by the MACg. The results support the unique contribution of an offender's adherence to gang culture to the variety and frequency of delinquent behaviour self-reported by offenders under the responsibility of Quebec's youth centres and correctional services. The score on both the original and the revised MACg scale proves, moreover, a more powerful explanatory factor than age, criminal precocity, delinquent peers and psychopathy, which are among the best predictors of delinquency.
The study also highlights the close relationship between strong adherence to gang culture and the marked presence of psychopathic traits, which heralds particularly serious problems. Despite its limitations, the thesis will contribute significantly to laying the foundations of a new explanatory model of the influence of street gang membership on individual behaviour. The MACg may also be used in risk assessment for male offenders under the responsibility of the criminal justice system and to improve the quality of the interventions intended for them.
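As a point of reference, the dichotomous Rasch model underlying this kind of item-response analysis relates a person's latent trait level to an item's difficulty through a logistic function. The sketch below uses hypothetical logit values; the MACg itself uses a polytomous, Likert-type response structure, so this is only the simplest member of that model family:

```python
import math

def rasch_probability(theta, difficulty):
    """Dichotomous Rasch model: probability that a person located at
    `theta` on the latent trait (here, adherence to gang culture)
    endorses an item of the given difficulty. Both parameters share
    one logit scale, which is what allows items and persons to be
    ordered along a single severity continuum."""
    return 1.0 / (1.0 + math.exp(-(theta - difficulty)))

# Hypothetical logit values: a person whose trait level equals the
# item difficulty endorses it with probability 0.5; a person two
# logits above endorses it far more often.
p_equal = rasch_probability(0.0, 0.0)   # 0.5
p_above = rasch_probability(2.0, 0.0)   # ~0.88
```

The shared logit scale is what makes it meaningful to say that some items mark a more severe degree of adherence than others.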
Abstract:
The present success in the manufacture of multi-layer interconnects in ultra-large-scale integration is largely due to the acceptable planarization capabilities of the chemical-mechanical polishing (CMP) process. In the past decade, copper has emerged as the preferred interconnect material. The greatest challenge in Cu CMP at present is the control of wafer surface non-uniformity at various scales. As the size of a wafer has increased to 300 mm, the wafer-level non-uniformity has assumed critical importance. Moreover, the pattern geometry in each die has become quite complex due to a wide range of feature sizes and multi-level structures. Therefore, it is important to develop a non-uniformity model that integrates wafer-, die- and feature-level variations into a unified, multi-scale dielectric erosion and Cu dishing model. In this paper, a systematic way of characterizing and modeling dishing in the single-step Cu CMP process is presented. The possible causes of dishing at each scale are identified in terms of several geometric and process parameters. The feature-scale pressure calculation based on the step-height at each polishing stage is introduced. The dishing model is based on pad elastic deformation and the evolving pattern geometry, and is integrated with the wafer- and die-level variations. Experimental and analytical means of determining the model parameters are outlined and the model is validated by polishing experiments on patterned wafers. Finally, practical approaches for minimizing Cu dishing are suggested.
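The step-height-based pressure partition can be illustrated with a deliberately simplified sketch; this is not the model developed in the paper, whose pad-deformation treatment is more detailed, and the threshold `contact_height` is a hypothetical parameter. While the local step height exceeds the depth to which the pad can conform, the raised features carry the entire applied load; once the pad contacts the recessed areas, the load is shared:

```python
def feature_scale_pressure(p_applied, area_fraction_up, step_height, contact_height):
    """Toy feature-scale pressure split for Cu CMP. p_applied is the
    nominal down pressure and area_fraction_up the fraction of the die
    occupied by raised features. Load balance requires
    p_up * area_fraction_up + p_down * (1 - area_fraction_up) == p_applied."""
    if step_height > contact_height:
        # Pad rides on the raised features only: they carry the full load.
        p_up = p_applied / area_fraction_up
        p_down = 0.0
    else:
        # Pad conforms to both levels: uniform pressure (crude limiting case).
        p_up = p_down = p_applied
    return p_up, p_down

# Early in polishing (large step) vs. near planarity (small step),
# with 50% raised-feature density and 2 psi applied pressure.
early = feature_scale_pressure(2.0, 0.5, 500e-9, 50e-9)   # (4.0, 0.0)
late = feature_scale_pressure(2.0, 0.5, 10e-9, 50e-9)     # (2.0, 2.0)
```

The amplified pressure on raised features early in polishing is what drives planarization; the uniform-pressure limit is where dishing of soft Cu relative to the dielectric becomes the dominant concern.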
Abstract:
This paper focuses on presenting and analysing information in order to establish how companies in the natural gas extraction and gas-fired power generation sector make investment decisions, concentrating on the logic they use when undertaking this process. This responds to the constant need to establish processes that enable sounder decision-making, drawing on every available tool for that purpose. Logic is one of these tools, as it allows factors to be chained together with the aim of obtaining positive results. For this reason, it is important to understand the use of this tool, taking into account in what manner and in which contexts it is applied. To provide sharper focus, this study centres on one specific sector: oil and natural gas extraction. This reflects the existing need for a theoretical foundation that clearly establishes the appropriate way to make decisions in a sector as diverse and complex as this one. Today's business environment demands a global vision, not one based on the linear causal logic currently taken as the reference. The oil and natural gas extraction sector is a distinctive example of how investment decisions are made, since most of its firms are capital-intensive and maintain a high flow of monetary resources.
Abstract:
This study proposes a new method for testing for the presence of momentum in nominal exchange rates, using a probabilistic approach. We illustrate our methodology by estimating a binary response model using information on local currency/US dollar exchange rates of eight emerging economies. After controlling for important variables affecting the behavior of exchange rates in the short run, we show evidence of exchange rate inertia; in other words, we find that exchange rate momentum is a common feature in this group of emerging economies, and thus foreign exchange traders participating in these markets are able to make excess returns by following technical analysis strategies. We find that the presence of momentum is asymmetric, being stronger in moments of currency depreciation than of appreciation. This behavior may be associated with central bank intervention.
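The idea of momentum as a binary-response phenomenon can be illustrated with a small simulation; this is a hypothetical data-generating process, not the paper's estimator. If the sign of today's exchange-rate change repeats yesterday's sign with probability above 0.5, the empirical repeat frequency recovers that persistence:

```python
import random

random.seed(42)  # fixed seed for a reproducible illustration

def simulate_moves(n, persistence):
    """Simulate signed exchange-rate changes with momentum: each move
    repeats the previous sign with probability `persistence`."""
    moves = [random.choice([-1, 1])]
    for _ in range(n - 1):
        if random.random() < persistence:
            moves.append(moves[-1])       # momentum: same sign as yesterday
        else:
            moves.append(-moves[-1])      # reversal
    return moves

def momentum_probability(moves):
    """Empirical P(sign_t == sign_{t-1}); values above 0.5 indicate momentum."""
    repeats = sum(1 for a, b in zip(moves, moves[1:]) if a == b)
    return repeats / (len(moves) - 1)

p = momentum_probability(simulate_moves(10_000, 0.6))  # close to 0.6
```

A binary response model such as a probit or logit generalizes this by letting the repeat probability depend on covariates, which is how short-run controls and the depreciation/appreciation asymmetry enter the analysis.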
Abstract:
This paper presents the first systematic chronostratigraphic study of the river terraces of the Exe catchment in South West England and a new conceptual model for terrace formation in unglaciated basins, with applicability to terrace staircase sequences elsewhere. The Exe catchment lay beyond the maximum extent of Pleistocene ice sheets, and the drainage pattern evolved from the Tertiary to the Middle Pleistocene, by which time the major valley systems were in place and downcutting began to create a staircase of strath terraces. The higher terraces (8-6) typically exhibit altitudinal overlap or appear to be draped over the landscape, whilst the middle terraces show greater altitudinal separation and the lowest terraces are of a cut-and-fill form. The terrace deposits investigated in this study were deposited in cold phases of the glacial-interglacial Milankovitch climatic cycles, with the lowest four being deposited in the Devensian Marine Isotope Stages (MIS) 4-2. A new cascade process-response model of basin terrace evolution in the Exe valley is proposed, which emphasises the role of lateral erosion in the creation of strath terraces and the reworking of inherited resistant lithological components down through the staircase. The resultant emergent valley topography and the reworking of artefacts along with gravel clasts have important implications for the dating of hominin presence and the local landscapes they inhabited. Whilst the terrace chronology suggested here is still not as detailed as that for the Thames or the Solent System, it does indicate a Middle Palaeolithic hominin presence in the region, probably prior to the late Wolstonian Complex or MIS 6. This supports existing data from cave sites in South West England.
Abstract:
In this work, a fault-tolerant control scheme is applied to an air handling unit of a heating, ventilation and air-conditioning system. Using the multiple-model approach it is possible to identify faults and to control the system effectively under both faulty and normal conditions. Using well-known techniques to model and control the process, this work focuses on the importance of the cost function in fault detection and on its influence on the reconfigurable controller. Experimental results show how the control of the terminal unit is affected in the presence of a fault, and how recovery and reconfiguration of the control action are able to deal with the effects of faults.
Abstract:
Background: The aim of this study was to evaluate stimulant medication response following a single dose of methylphenidate (MPH) in children and young people with hyperkinetic disorder, using infrared motion analysis combined with a continuous performance task (QbTest system) as objective measures. The hypothesis was that a moderate test dose of stimulant medication could determine a robust treatment response, partial response or non-response in relation to activity, attention and impulse control measures. Methods: The study included 44 children and young people between the ages of 7 and 18 years with a diagnosis of hyperkinetic disorder (F90 & F90.1). A single-dose protocol incorporated the time-course effects of both immediate-release MPH and extended-release MPH (Concerta XL, Equasym XL) to determine comparable peak efficacy periods post intake. Results: A robust treatment response, with objective measures reverting to the population mean, was found in 37 participants (84%). Three participants (7%) demonstrated a partial response to MPH, and four participants (9%) were determined to be non-responders due to deteriorating activity measures together with no improvement in attention and impulse control measures. Conclusion: Objective measures provide, early in the prescribing process, the opportunity to measure treatment response and monitor adverse reactions to stimulant medication. Most treatment responders demonstrated an effective response to MPH on a moderate test dose, facilitating a swift and more optimal titration process.
Acute effects of meal fatty acid composition on insulin sensitivity in healthy post-menopausal women
Abstract:
Postprandial plasma insulin concentrations after a single high-fat meal may be modified by the presence of specific fatty acids although the effects of sequential meal ingestion are unknown. The aim of the present study was to examine the effects of altering the fatty acid composition in a single mixed fat-carbohydrate meal on glucose metabolism and insulin sensitivity of a second meal eaten 5 h later. Insulin sensitivity was assessed using a minimal model approach. Ten healthy post-menopausal women underwent four two-meal studies in random order. A high-fat breakfast (40 g fat) where the fatty acid composition was predominantly saturated fatty acids (SFA), n-6 polyunsaturated fatty acids (PUFA), long-chain n-3 PUFA or monounsaturated fatty acids (MUFA) was followed 5 h later by a low-fat, high-carbohydrate lunch (5.7 g fat), which was identical in all four studies. The plasma insulin response was significantly higher following the SFA meal than the other meals after both breakfast and lunch (P<0.006) although there was no effect of breakfast fatty acid composition on plasma glucose concentrations. Postprandial insulin sensitivity (SI(Oral)) was assessed for 180 min after each meal. SI(Oral) was significantly lower after lunch than after breakfast for all four test meals (P=0.019) following the same rank order (SFA < n-6 PUFA < n-3 PUFA < MUFA) for each meal. The present study demonstrates that a single meal rich in SFA reduces postprandial insulin sensitivity with 'carry-over' effects for the next meal.
Abstract:
This paper investigates whether obtaining sustainable building certification entails a rental premium for commercial office buildings and tracks its development over time. To this aim, both a difference-in-differences and a fixed-effects model approach are applied to a large panel dataset of office buildings in the United States in the 2000–2010 period. The results indicate a significant rental premium for both ENERGY STAR and LEED certified buildings. Controlling for confounding factors, this premium is shown to have increased steadily from 2006 to 2008, followed by a moderate decline in the subsequent periods. The results also show a significant positive relationship between ENERGY STAR labeling and building occupancy rates.
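The difference-in-differences logic can be shown in miniature with hypothetical rent figures (not the study's data): the certification premium is the rent change among certified buildings net of the market-wide change among comparable non-certified buildings over the same period.

```python
def did_estimate(treated_pre, treated_post, control_pre, control_post):
    """Difference-in-differences: change in mean rent for certified
    (treated) buildings minus the change for non-certified (control)
    buildings, netting out market-wide rent trends."""
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(treated_post) - mean(treated_pre)) - (
        mean(control_post) - mean(control_pre))

# Hypothetical rents ($/sq ft): certified buildings gain 4 on average,
# the market gains 1, so the estimated certification premium is 3.
premium = did_estimate([30, 32], [34, 36], [28, 30], [29, 31])  # 3.0
```

In the panel setting of the paper, the same contrast is embedded in a regression with building and time fixed effects, which additionally controls for time-invariant building characteristics.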
Abstract:
Purpose – This study aims to provide a review of brownfield policy and the emerging sustainable development agenda in the UK, and to examine the development industry’s (both commercial and residential) role and attitudes towards brownfield regeneration and contaminated land. Design/methodology/approach – The paper analyses results from a two-stage survey of commercial and residential developers carried out in mid-2004, underpinned by structured interviews with 11 developers. Findings – The results suggest that housebuilding on brownfield is no longer the preserve of specialists, and is now widespread throughout the industry in the UK. The redevelopment of contaminated sites for residential use could be threatened by the impact of the EU Landfill Directive. The findings also suggest that developers are not averse to developing on contaminated sites, although post-remediation stigma remains an issue. The market for warranties and insurance continues to evolve. Research limitations/implications – The survey is based on a sample which represents nearly 30 per cent of UK volume housebuilding. Although the response in the smaller developer groups was relatively under-represented, non-response bias was not found to be a significant issue. More research is needed to assess the way in which developers approach brownfield regeneration at a local level. Practical implications – The research suggests that clearer Government guidance in the UK is needed on how to integrate concepts of sustainability in brownfield development and that EU policy, which has been introduced for laudable aims, is creating tensions within the development industry. There may be an emphasis towards greenfield development in the future, as the implications of the Barker review are felt. Originality/value – This is a national survey of developers’ attitudes towards brownfield development in the UK, following the Barker Review, and highlights key issues in UK and EU policy layers. 
Keywords: Brownfield sites, Contamination
Paper type: Research paper
Abstract:
Purpose – This study aims to examine the moderating effects of external environment and organisational structure in the relationship between business-level strategy and organisational performance. Design/methodology/approach – The focus of the study is on manufacturing firms in the UK belonging to the electrical and mechanical engineering sectors, and respondents were CEOs. Both objective and subjective measures were used to assess performance. Non-response bias was assessed statistically and appropriate measures taken to minimise the impact of common method variance (CMV). Findings – The results indicate that environmental dynamism and hostility act as moderators in the relationship between business-level strategy and relative competitive performance. In low-hostility environments a cost-leadership strategy and in high-hostility environments a differentiation strategy lead to better performance compared with competitors. In highly dynamic environments a cost-leadership strategy and in low dynamism environments a differentiation strategy are more helpful in improving financial performance. Organisational structure moderates the relationship of both the strategic types with ROS. However, in the case of ROA, the moderating effect of structure was found only in its relationship with cost-leadership strategy. A mechanistic structure is helpful in improving the financial performance of organisations adopting either a cost-leadership or a differentiation strategy. Originality/value – Unlike many other empirical studies, the study makes an important contribution to the literature by examining the moderating effects of both environment and structure on the relationship between business-level strategy and performance in a detailed manner, using moderated regression analysis.
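The moderated-regression logic behind such findings can be sketched with hypothetical coefficients: the interaction term makes the effect of strategy on performance depend on the environmental moderator, and the "simple slope" of strategy at a given moderator level is b1 + b3·moderator.

```python
def simple_slope(b_strategy, b_interaction, moderator_value):
    """In the moderated regression
        performance = b0 + b1*strategy + b2*env + b3*(strategy*env),
    the marginal effect of strategy at a given environment level is
    b1 + b3*env. A significant b3 is the statistical signature of
    moderation."""
    return b_strategy + b_interaction * moderator_value

# Hypothetical coefficients: differentiation helps in hostile
# environments (env = +1 SD) but hurts in benign ones (env = -1 SD).
slope_hostile = simple_slope(0.10, 0.25, 1.0)    # 0.35
slope_benign = simple_slope(0.10, 0.25, -1.0)    # -0.15
```

Probing the interaction at high and low moderator values in this way is the standard follow-up once moderated regression analysis detects a significant interaction.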
Abstract:
Purpose – The purpose of this study is to examine the relationship between business-level strategy and organisational performance and to test the applicability of Porter's generic strategies in explaining differences in the performance of organisations. Design/methodology/approach – The study focused on manufacturing firms in the UK belonging to the electrical and mechanical engineering sectors. Data were collected through a postal survey of 124 organisations, and the respondents were all at CEO level. Both objective and subjective measures were used to assess performance. Non-response bias was assessed statistically and was not found to be a major problem. Appropriate measures were taken against common method variance (CMV), and statistical tests indicated that CMV does not affect the results of this study. Findings – The results indicate that firms adopting one of the strategies, namely cost-leadership or differentiation, perform better than “stuck-in-the-middle” firms which do not have a dominant strategic orientation. The integrated strategy group has lower performance compared with cost-leaders and differentiators in terms of financial performance measures. This supports Porter's view that combination strategies are unlikely to be effective in organisations. However, the cost-leadership and differentiation strategies were not strongly correlated with the financial performance measures, indicating the limitations of Porter's generic strategies in explaining performance heterogeneity in organisations. Originality/value – This study makes an important contribution to the literature by identifying gaps in the literature through a systematic review and addressing them.
Abstract:
A mesoscale meteorological model (FOOT3DK) is coupled with a gas exchange model to simulate surface fluxes of CO2 and H2O under field conditions. The gas exchange model consists of a C3 single leaf photosynthesis sub-model and an extended big leaf (sun/shade) sub-model that divides the canopy into sunlit and shaded fractions. Simulated CO2 fluxes of the stand-alone version of the gas exchange model correspond well to eddy-covariance measurements at a test site in a rural area in the west of Germany. The coupled FOOT3DK/gas exchange model is validated for the diurnal cycle at singular grid points, and delivers realistic fluxes with respect to their order of magnitude and to the general daily course. Compared to the Jarvis-based big leaf scheme, simulations of latent heat fluxes with a photosynthesis-based scheme for stomatal conductance are more realistic. As expected, flux averages are strongly influenced by the underlying land cover. While the simulated net ecosystem exchange is highly correlated with leaf area index, this correlation is much weaker for the latent heat flux. Photosynthetic CO2 uptake is associated with transpirational water loss via the stomata, and the resulting opposing surface fluxes of CO2 and H2O are reproduced with the model approach. Over vegetated surfaces it is shown that the coupling of a photosynthesis-based gas exchange model with the land-surface scheme of a mesoscale model results in more realistic simulated latent heat fluxes.
Abstract:
A range of possible changes in the frequency and characteristics of European wind storms under future climate conditions was investigated on the basis of a multi-model ensemble of 9 coupled global climate model (GCM) simulations for the 20th and 21st centuries following the IPCC SRES A1B scenario. A multi-model approach allowed an estimation of the (un)certainties of the climate change signals. General changes in large-scale atmospheric flow were analysed, the occurrence of wind storms was quantified, and atmospheric features associated with wind storm events were considered. Identified storm days were investigated according to atmospheric circulation, associated pressure patterns, cyclone tracks and wind speed patterns. Validation against reanalysis data revealed that the GCMs are in general capable of realistically reproducing characteristics of European circulation weather types (CWTs) and wind storms. Results are given with respect to frequency of occurrence, storm-associated flow conditions, cyclone tracks and specific wind speed patterns. Under anthropogenic climate change conditions (SRES A1B scenario), increased frequency of westerly flow during winter is detected over the central European investigation area. In the ensemble mean, the number of detected wind storm days increases by between 19 and 33% for 2 different measures of storminess; only 1 GCM revealed fewer storm days. The increased number of storm days detected in most models is disproportionately high compared to the related CWT changes. The mean intensity of cyclones associated with storm days in the ensemble mean increases by about 10 (±10)% in the Eastern Atlantic, near the British Isles and in the North Sea. Accordingly, wind speeds associated with storm events increase significantly by about 5 (±5)% over large parts of central Europe, mainly on days with westerly flow.
The basic conclusions of this work remain valid if different ensemble constructions are considered, such as leaving out an outlier model or including multiple runs of one particular model.
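Ensemble statistics of this kind (mean change, spread, and model agreement on the sign of the signal) can be computed as follows; the nine per-model values are hypothetical, chosen only to mirror an eight-increase/one-decrease pattern like the one described above:

```python
def ensemble_signal(changes_percent):
    """Summarise per-model climate-change signals: ensemble mean,
    spread (min..max), and the number of models agreeing with the
    sign of the ensemble-mean change."""
    n = len(changes_percent)
    mean = sum(changes_percent) / n
    agree = sum(1 for c in changes_percent if (c > 0) == (mean > 0))
    return mean, min(changes_percent), max(changes_percent), agree

# Nine hypothetical per-model changes in storm days (%): eight
# increases and one decrease.
mean, lo, hi, agree = ensemble_signal([19, 33, 25, 22, 28, 30, 21, 26, -5])
```

Reporting the spread and sign agreement alongside the ensemble mean is what turns a multi-model ensemble into an estimate of the (un)certainty of the change signal rather than a single best guess.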