927 results for Milroy, Lesley: Sociolinguistics: method and interpretation
Abstract:
With advances in information technology, economic and financial time-series data are increasingly available. However, if standard time-series techniques are used, this wealth of information comes with the problem of dimensionality. Since most series of interest are highly correlated, their dimension can be reduced using factor analysis, a technique that has become increasingly popular in economics since the 1990s. Given the availability of data and the computational advances, several new questions arise. What are the effects and the transmission of structural shocks in a data-rich environment? Can the information contained in a large set of economic indicators help to better identify monetary policy shocks, given the problems encountered in applications using standard models? Can we identify financial shocks and measure their effects on the real economy? Can the existing factor methodology be improved by incorporating another dimension-reduction technique such as VARMA modelling? Does this produce better forecasts of the major macroeconomic aggregates and help with impulse-response analysis? Finally, can factor analysis be applied to random parameters? For example, is there only a small number of sources of the time instability of coefficients in empirical macroeconomic models? Using structural factor analysis and VARMA modelling, my thesis answers these questions in five articles. The first two chapters study the effects of monetary and financial shocks in a data-rich environment. The third article proposes a new method combining factor models and VARMA. This approach is applied in the fourth article to measure the effects of credit shocks in Canada. The contribution of the last chapter is to impose a factor structure on time-varying parameters and to show that there is a small number of sources of this instability.

The first article analyses the transmission of monetary policy in Canada using a factor-augmented vector autoregressive (FAVAR) model. Previous VAR-based studies found several empirical anomalies following a monetary policy shock. We estimate the FAVAR model using a large number of monthly and quarterly macroeconomic series. We find that the information contained in the factors is important for properly identifying the transmission of monetary policy and helps correct the standard empirical anomalies. Finally, the FAVAR framework yields impulse-response functions for every indicator in the data set, producing the most comprehensive analysis to date of the effects of monetary policy in Canada.

Motivated by the last economic crisis, research on the role of the financial sector has regained importance. In the second article we examine the effects and the propagation of credit shocks on the real economy using a large set of economic and financial indicators within a structural factor model. We find that a credit shock immediately raises credit spreads, lowers the value of Treasury bills and causes a recession. These shocks have a sizeable effect on measures of real activity, price indices, and leading and financial indicators. Unlike other studies, our identification procedure for the structural shock requires no timing restrictions between the financial and macroeconomic factors. Moreover, it yields an interpretation of the factors without constraining their estimation.

In the third article we study the relationship between the VARMA and factor representations of vector stochastic processes and propose a new class of factor-augmented VARMA (FAVARMA) models. Our starting point is the observation that, in general, a multivariate series and its associated factors cannot both follow a finite-order VAR process. We show that the dynamic process of the factors, extracted as linear combinations of the observed variables, is in general a VARMA and not a VAR, as is assumed elsewhere in the literature. Second, we show that even if the factors follow a finite-order VAR, this implies a VARMA representation for the observed series. We therefore propose the FAVARMA framework, which combines these two methods of reducing the number of parameters. The model is applied in two forecasting exercises using the US and Canadian data of Boivin, Giannoni and Stevanovic (2010, 2009), respectively. The results show that the VARMA component helps forecast the major macroeconomic aggregates better than standard models. Finally, we estimate the effects of a monetary shock using the data and the identification scheme of Bernanke, Boivin and Eliasz (2005). Our FAVARMA(2,1) model with six factors delivers coherent and precise results on the effects and transmission of monetary policy in the United States. Unlike the FAVAR model employed in that study, where 510 VAR coefficients had to be estimated, we produce similar results with only 84 parameters in the factors' dynamic process.

The objective of the fourth article is to identify and measure the effects of credit shocks in Canada in a data-rich environment using a structural FAVARMA model. Within the financial-accelerator framework developed by Bernanke, Gertler and Gilchrist (1999), we approximate the external finance premium by credit spreads. On the one hand, we find that an unanticipated increase in the US external finance premium generates a significant and persistent recession in Canada, accompanied by an immediate rise in Canadian credit spreads and interest rates. The common component appears to capture the important dimensions of the cyclical fluctuations of the Canadian economy. Variance-decomposition analysis reveals that this credit shock has a sizeable effect on various sectors of real activity, price indices, leading indicators and credit spreads. On the other hand, an unexpected rise in the Canadian external finance premium has no significant effect in Canada. We show that the effects of credit shocks in Canada are essentially driven by global conditions, proxied here by the US market. Finally, given the identification procedure for the structural shocks, we find economically interpretable factors.

The behaviour of agents and of the economic environment may vary over time (for example, changes in monetary policy strategy or in the volatility of shocks), inducing parameter instability in reduced-form models. Standard time-varying-parameter (TVP) models traditionally assume independent stochastic processes for all TVPs. In this article we show that the number of sources of time variation in the coefficients is probably very small, and we provide the first known empirical evidence of this in empirical macroeconomic models. The Factor-TVP approach proposed in Stevanovic (2010) is applied to a standard VAR with random coefficients (TVP-VAR). We find that a single factor explains most of the variability of the VAR coefficients, while the shock-volatility parameters vary independently. The common factor is positively correlated with the unemployment rate. The same analysis is carried out with data including the recent financial crisis. The procedure now suggests two factors, and the behaviour of the coefficients shows an important change since 2007. Finally, the method is applied to a TVP-FAVAR model. We find that only five dynamic factors govern the time instability of almost 700 coefficients.
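To make the factor-augmented VAR idea concrete, the sketch below shows the two-step estimation strategy commonly used in this literature: principal-component factors extracted from a large standardized panel, an OLS VAR fitted to the factors (and, in practice, an observed policy variable), and impulse responses computed recursively. It is a minimal illustration only, not the thesis's actual code; the names X, n_factors and n_lags are placeholders, and identification details (Cholesky ordering, factor rotations) are omitted.

```python
# Minimal two-step FAVAR-style sketch using only NumPy (illustrative assumptions throughout).
import numpy as np

def extract_factors(X, n_factors):
    """Principal-component factor estimates from a T x N panel X (one series per column)."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)               # standardise each series
    _, eigvec = np.linalg.eigh(np.cov(Xs, rowvar=False))
    loadings = eigvec[:, ::-1][:, :n_factors]                # eigenvectors of the largest eigenvalues
    return Xs @ loadings                                      # T x n_factors factor estimates

def fit_var(Z, n_lags):
    """OLS estimation of a VAR(n_lags) on the T x K matrix Z; returns lag coefficient matrices."""
    T, K = Z.shape
    Y = Z[n_lags:]
    W = np.hstack([Z[n_lags - l - 1:T - l - 1] for l in range(n_lags)])
    W = np.hstack([np.ones((T - n_lags, 1)), W])              # intercept
    B = np.linalg.lstsq(W, Y, rcond=None)[0]
    # coefs[l][i, j] = effect of variable j lagged (l + 1) periods on equation i
    return B[1:].reshape(n_lags, K, K).transpose(0, 2, 1)

def impulse_responses(coefs, shock, horizons):
    """Recursive impulse responses of the VAR to an initial K-vector shock."""
    p, K, _ = coefs.shape
    irf = np.zeros((horizons, K))
    irf[0] = shock
    for h in range(1, horizons):
        for l in range(min(h, p)):
            irf[h] += coefs[l] @ irf[h - l - 1]
    return irf
```

In a typical FAVAR application the observed policy rate would be appended to the extracted factors before fitting the VAR, and the shock vector would pick out its equation under the chosen identification scheme.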
Abstract:
Among the external manifestations of scoliosis, the rib hump, which is associated with the ribs' deformities and rotations, constitutes the most disturbing aspect of the scoliotic deformity for patients. A personalized 3-D model of the rib cage is important for a better evaluation of the deformity, and hence, a better treatment planning. A novel method for the 3-D reconstruction of the rib cage, based only on two standard radiographs, is proposed in this paper. For each rib, two points are extrapolated from the reconstructed spine, and three points are reconstructed by stereo radiography. The reconstruction is then refined using a surface approximation. The method was evaluated using clinical data of 13 patients with scoliosis. A comparison was conducted between the reconstructions obtained with the proposed method and those obtained by using a previous reconstruction method based on two frontal radiographs. A first comparison criterion was the distances between the reconstructed ribs and the surface topography of the trunk, considered as the reference modality. The correlation between ribs axial rotation and back surface rotation was also evaluated. The proposed method successfully reconstructed the ribs of the 6th-12th thoracic levels. The evaluation results showed that the 3-D configuration of the new rib reconstructions is more consistent with the surface topography and provides more accurate measurements of ribs axial rotation.
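As a rough illustration of the stereo-radiography step mentioned above, the sketch below triangulates a single landmark from its projections in two calibrated views using the standard linear (DLT) method. It is a generic example under assumed 3x4 projection matrices P1 and P2, not the paper's reconstruction pipeline; the spine extrapolation and the surface-approximation refinement are not reproduced.

```python
# Generic linear (DLT) triangulation of one point from two calibrated radiographic views.
import numpy as np

def triangulate_point(P1, P2, u1, u2):
    """P1, P2: assumed 3x4 projection matrices of the two radiographs.
    u1, u2: (x, y) image coordinates of the same landmark in each view.
    Returns the 3-D point in world coordinates."""
    A = np.vstack([
        u1[0] * P1[2] - P1[0],
        u1[1] * P1[2] - P1[1],
        u2[0] * P2[2] - P2[0],
        u2[1] * P2[2] - P2[1],
    ])
    # Homogeneous least-squares solution: right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```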
Abstract:
The finite element method (FEM) is now developed to solve the two-dimensional Hartree-Fock (HF) equations for atoms and diatomic molecules. The method and its implementation are described, and results are presented for the atoms Be, Ne and Ar as well as the diatomic molecules LiH, BH, N2 and CO as examples. Total energies and eigenvalues calculated with the FEM at the HF level are compared with results obtained with the standard numerical methods used to solve the one-dimensional HF equations for atoms, and, for diatomic molecules, with the traditional LCAO quantum-chemical methods and the newly developed finite difference method at the HF level. In general, the accuracy increases from the LCAO to the finite difference to the finite element method.
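For readers unfamiliar with the finite element method itself, the following sketch assembles and solves a linear-element discretisation of a one-dimensional model problem. It only illustrates the generic FEM machinery (element stiffness assembly, load integration, boundary conditions); the two-dimensional Hartree-Fock solver of the paper is far more involved and is not reproduced here.

```python
# Generic FEM illustration on a 1-D model problem: -u'' = f on [0, 1], u(0) = u(1) = 0,
# with linear elements on a uniform mesh (not the paper's 2-D HF solver).
import numpy as np

def fem_poisson_1d(f, n_elements):
    """Assemble and solve the linear-element FEM system for -u'' = f(x)."""
    n_nodes = n_elements + 1
    x = np.linspace(0.0, 1.0, n_nodes)
    h = x[1] - x[0]
    K = np.zeros((n_nodes, n_nodes))      # global stiffness matrix
    b = np.zeros(n_nodes)                 # load vector
    for e in range(n_elements):
        i, j = e, e + 1
        # Element stiffness for linear hat functions: (1/h) * [[1, -1], [-1, 1]]
        K[i, i] += 1.0 / h; K[j, j] += 1.0 / h
        K[i, j] -= 1.0 / h; K[j, i] -= 1.0 / h
        # Midpoint quadrature for the load term
        xm = 0.5 * (x[i] + x[j])
        b[i] += 0.5 * h * f(xm)
        b[j] += 0.5 * h * f(xm)
    # Homogeneous Dirichlet boundary conditions: solve on the interior nodes only
    interior = slice(1, n_nodes - 1)
    u = np.zeros(n_nodes)
    u[interior] = np.linalg.solve(K[interior, interior], b[interior])
    return x, u
```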
Abstract:
It is well known that regression analyses involving compositional data need special attention because the data are not of full rank. For a regression analysis where both the dependent and independent variables are components, we propose a transformation of the components that emphasizes their roles as dependent and independent variables. A simple linear regression can then be performed on the transformed components. The regression line can be depicted in a ternary diagram, facilitating the interpretation of the analysis in terms of components. An example with time-budgets illustrates the method and the graphical features.
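The sketch below is a minimal numerical illustration of the general idea, assuming three-part compositions and a generic additive log-ratio (alr) transform; the specific transformation proposed in the paper, which emphasises the dependent/independent roles of the components, is not reproduced here.

```python
# Illustrative sketch only: generic log-ratio treatment of three-part compositions,
# followed by ordinary least-squares regression on the transformed coordinates.
import numpy as np

def alr(comp):
    """Additive log-ratio transform of an (n, 3) composition, last part as reference."""
    comp = comp / comp.sum(axis=1, keepdims=True)    # close to the unit simplex
    return np.log(comp[:, :2] / comp[:, 2:3])        # (n, 2) unconstrained coordinates

def alr_inverse(y):
    """Map (n, 2) log-ratio coordinates back to compositions summing to one."""
    z = np.hstack([np.exp(y), np.ones((y.shape[0], 1))])
    return z / z.sum(axis=1, keepdims=True)

def fit_line(x_coords, y_coords):
    """Simple linear regression of the transformed dependent composition on the
    first log-ratio coordinate of the transformed independent composition."""
    X = np.column_stack([np.ones(len(x_coords)), x_coords[:, 0]])
    beta, *_ = np.linalg.lstsq(X, y_coords, rcond=None)
    return beta                                       # 2 x 2: intercepts and slopes
```

Back-transforming fitted values with alr_inverse gives points on a curve in the simplex, which is what would be drawn in the ternary diagram.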
Abstract:
Similarities between the anatomies of living organisms are often used to draw conclusions regarding the ecology and behaviour of extinct animals. Several pterosaur taxa are postulated to have been skim-feeders based largely on supposed convergences of their jaw anatomy with that of the modern skimming bird, Rynchops spp. Using physical and mathematical models of Rynchops bills and pterosaur jaws, we show that skimming is considerably more energetically costly than previously thought for Rynchops and that pterosaurs weighing more than one kilogram would not have been able to skim at all. Furthermore, anatomical comparisons between the highly specialised skull of Rynchops and those of postulated skimming pterosaurs suggest that even smaller forms were poorly adapted for skim-feeding. Our results refute the hypothesis that some pterosaurs commonly used skimming as a foraging method and illustrate the pitfalls involved in extrapolating from limited morphological convergence.
Abstract:
Interpretation of ambiguity is consistently associated with anxiety in children; however, the temporal relationship between interpretation and anxiety remains unclear, as do the developmental origins of interpretative biases. This study set out to test a model of the development of interpretative biases in a prospective study of 110 children aged 5–9 years. Children and their parents were assessed three times, annually, on measures of anxiety and interpretation of ambiguous scenarios (including, for parents, both their own interpretations and their expectations regarding their child). Three models were constructed to assess associations between parent and child anxiety and threat and distress cognitions and expectancies. All three models were a reasonable fit to the data and supported the conclusions that: (i) children's threat and distress cognitions were stable over time and were significantly associated with anxiety; (ii) parents' threat and distress cognitions and expectancies significantly predicted child threat cognitions at some time points; and (iii) parental anxiety significantly predicted parents' cognitions, which predicted parental expectancies at some time points. Parental expectancies were also significantly predicted by child cognitions. The findings varied depending on assessment time point and on whether threat or distress cognitions were being considered. The findings support the notion that child and parent cognitive processes, in particular parental expectations, may be a useful target in the treatment or prevention of anxiety disorders in children.
Abstract:
The United Nations Intergovernmental Panel on Climate Change (IPCC) makes it clear that climate change is due to human activities, and it recognises buildings as a distinct sector among the seven analysed in its 2007 Fourth Assessment Report. Global concerns have escalated regarding carbon emissions and sustainability in the built environment. The built environment is a human-made setting to accommodate human activities, including buildings and transport, and covers an interdisciplinary field addressing design, construction, operation and management. Specifically, Sustainable Buildings are expected to achieve high performance throughout the life-cycle of siting, design, construction, operation, maintenance and demolition, in the following areas: energy and resource efficiency; cost effectiveness; minimisation of emissions that negatively impact global warming, indoor air quality and acid rain; minimisation of waste discharges; and maximisation of the fulfilment of occupants' health and wellbeing requirements. Professionals in the built environment sector, for example urban planners, architects, building scientists, engineers, facilities managers, performance assessors and policy makers, will play a significant role in delivering a sustainable built environment. Delivering a sustainable built environment needs an integrated approach, so it is essential for built environment professionals to have interdisciplinary knowledge in building design and management. Building and urban designers need a good understanding of the planning, design and management of buildings in terms of low carbon and energy efficiency. There are a limited number of traditional engineers who know how to design environmental systems (services engineers) in great detail. Yet there is a very large market for technologists with multi-disciplinary skills who are able to identify the need for, envision and manage the deployment of a wide range of sustainable technologies, both passive (architectural) and active (engineering systems), and select the appropriate approach. Employers seek applicants with skills in analysis, decision-making/assessment, computer simulation and project implementation. An integrated approach is expected in practice, which encourages built environment professionals to think 'out of the box' and learn to analyse real problems using the most relevant approach, irrespective of discipline. The Design and Management of Sustainable Built Environment book aims to produce readers able to apply fundamental scientific research to solve real-world problems in the general area of sustainability in the built environment. The book contains twenty chapters covering climate change and sustainability, urban design and assessment (planning, travel systems, urban environment), urban management (drainage and waste), buildings (indoor environment, architectural design and renewable energy), simulation techniques (energy and airflow), management (end-user behaviour, facilities and information), assessment (materials and tools), procurement, and case studies (BRE Science Park). Chapters one and two present general global issues of climate change and sustainability in the built environment. Chapter one illustrates that applying the concepts of sustainability to the urban environment (buildings, infrastructure, transport) raises some key issues for tackling climate change, resource depletion and energy supply.
Buildings, and the way we operate them, play a vital role in tackling global greenhouse gas emissions. Holistic thinking and an integrated approach to delivering a sustainable built environment are highlighted. Chapter two demonstrates the important role that buildings (their services and appliances) and building energy policies play in this area. Substantial investment is required to implement such policies, much of which will earn a good return. Chapters three and four discuss urban planning and transport. Chapter three stresses the importance of using modelling techniques at an early stage for the strategic master-planning of a new development or a retrofit programme. A general framework for sustainable urban-scale master planning is introduced. This chapter also addresses the need to develop a more holistic and pragmatic view of how the built environment performs, in order to produce tools that help design for a higher level of sustainability and, in particular, of how people plan, design and use it. Chapter four discusses microcirculation, an emerging and challenging area relating to changing travel behaviour in the quest for urban sustainability. The chapter outlines the main drivers of travel behaviour and choices, the workings of the transport system and its interaction with urban land use. It also covers the new approach to managing urban traffic to maximise economic, social and environmental benefits. Chapters five and six present topics related to urban microclimates, including thermal and acoustic issues. Chapter five discusses urban microclimates and the urban heat island, as well as the interrelationship of urban design (urban forms and textures) with energy consumption and urban thermal comfort. It introduces models that can be used to analyse microclimates for a careful and considered approach to planning sustainable cities. Chapter six discusses urban acoustics, focusing on urban noise evaluation and mitigation. Various prediction and simulation methods for sound propagation in micro-scale urban areas, as well as techniques for large-scale urban noise-mapping, are presented. Chapters seven and eight discuss urban drainage and waste management. The growing demand for housing and commercial developments in the 21st century, as well as the environmental pressure caused by climate change, has increased the focus on sustainable urban drainage systems (SUDS). Chapter seven discusses the SUDS concept, an integrated approach to surface water management. It takes into consideration quality, quantity and amenity aspects to provide a more pleasant habitat for people as well as increasing the biodiversity value of the local environment. Chapter eight discusses the main issues in urban waste management. It points out that population increases, land-use pressures, and technical and socio-economic influences have become inextricably interwoven, making it ever more challenging to ensure a safe means of dealing with humanity's waste. Sustainable building design needs to consider healthy indoor environments, minimising energy for heating, cooling and lighting, and maximising the utilisation of renewable energy. Chapter nine considers how people respond to the physical environment and how that understanding is used in the design of indoor environments. It considers environmental components such as thermal, acoustic, visual, air quality and vibration, and their interaction and integration.
Chapter ten introduces the concept of passive building design and its relevant strategies, including passive solar heating, shading, natural ventilation, daylighting and thermal mass, in order to minimise heating and cooling loads as well as energy consumption for artificial lighting. Chapter eleven discusses the growing importance of integrating Renewable Energy Technologies (RETs) into buildings, the range of technologies currently available and what to consider during the technology selection process in order to minimise carbon emissions from burning fossil fuels. The chapter draws to a close by highlighting the issues concerning system design and the need for careful integration and management of RETs once installed, and for home owners and operators to understand the characteristics of the technology in their building. Computer simulation tools play a significant role in sustainable building design because, as modern built environment design (buildings and systems) becomes more complex, it requires tools to assist in the design process. Chapter twelve gives an overview of the primary benefits and users of simulation programs, discusses the role of simulation in the construction process and examines the validity and interpretation of simulation results. Chapter thirteen focuses on the Computational Fluid Dynamics (CFD) simulation method used for optimisation and performance assessment of technologies and solutions for sustainable building design, and its application through a series of case studies. People and building performance are intimately linked. A better understanding of occupants' interaction with the indoor environment is essential to building energy and facilities management. Chapter fourteen focuses on the issue of occupant behaviour: principally its impact, and the influence of building performance on occupants. Chapter fifteen explores the discipline of facilities management and the contribution that this emerging profession makes to securing sustainable building performance. The chapter highlights a much greater diversity of opportunities in sustainable building design that extends well into the operational life. Chapter sixteen reviews the concepts of modelling information flows and the use of Building Information Modelling (BIM), describing these techniques and how these aspects of information management can help drive sustainability. An explanation is offered of why information management is the key to 'life-cycle' thinking in sustainable building and construction. Measurement of building performance and sustainability is a key issue in delivering a sustainable built environment. Chapter seventeen identifies the means by which construction materials can be evaluated with respect to their sustainability. It identifies the key issues that affect the sustainability of construction materials and the methodologies commonly used to assess them. Chapter eighteen focuses on green building assessment, green building materials, and sustainable construction and operation. Commonly used assessment tools such as the BRE Environmental Assessment Method (BREEAM), Leadership in Energy and Environmental Design (LEED) and others are introduced. Chapter nineteen discusses sustainable procurement, one of the areas to have naturally emerged from the overall sustainable development agenda. It aims to ensure that current use of resources does not compromise the ability of future generations to meet their own needs.
Chapter twenty is a best-practice exemplar: the BRE Innovation Park, which features a number of demonstration buildings built to the UK Government's Code for Sustainable Homes. It showcases the very latest innovative methods of construction and cutting-edge technology for sustainable buildings. In summary, the Design and Management of Sustainable Built Environment book is the result of the co-operation and dedication of the individual chapter authors. We hope readers benefit from gaining a broad interdisciplinary knowledge of design and management in the built environment in the context of sustainability. We believe that the knowledge and insights of our academic and professional colleagues from different institutions and disciplines illuminate a way of delivering a sustainable built environment through holistic, integrated design and management approaches. Last, but not least, I would like to take this opportunity to thank all the chapter authors for their contributions. I would like to thank David Lim for his assistance in the editorial work and proofreading.
Abstract:
Infrared polarization and intensity imagery provide complementary and discriminative information for image understanding and interpretation. In this paper, a novel fusion method is proposed that effectively merges this information using various combination rules. It makes use of both the low-frequency and high-frequency image components from the support value transform (SVT), and applies fuzzy logic in the combination process. The images to be fused (both infrared polarization and intensity images) are first decomposed into low-frequency component images and support value image sequences by the SVT. The low-frequency component images are then combined using a fuzzy combination rule blending three sub-combination methods: (1) region feature maximum, (2) region feature weighted average, and (3) pixel value maximum; the support value image sequences are merged using a fuzzy combination rule fusing two sub-combination methods: (1) pixel energy maximum and (2) region feature weighting. Taking as variables two newly defined features, the low-frequency difference feature for the low-frequency component images and the support-value difference feature for the support value image sequences, trapezoidal membership functions are developed to tune the fuzzy fusion process. Finally, the fused image is obtained by inverse SVT operations. Experimental results from visual inspection and quantitative evaluation both indicate the superiority of the proposed method over its counterparts in the fusion of infrared polarization and intensity images.
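The sketch below illustrates only the fuzzy weighting idea described above: a trapezoidal membership function evaluated on a difference feature decides how corresponding coefficients from the two source images are blended. The SVT decomposition itself and the paper's actual rule parameters and region features are not reproduced; the breakpoints a, b, c, d and the blending rule are placeholder assumptions.

```python
# Sketch of trapezoidal-membership fuzzy blending of two coefficient arrays
# (e.g. low-frequency components of the polarization and intensity images).
import numpy as np

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: 0 below a, rising to 1 on [b, c], falling to 0 above d."""
    return np.clip(np.minimum((x - a) / (b - a), (d - x) / (d - c)), 0.0, 1.0)

def fuzzy_blend(coef_pol, coef_int, diff_feature, a=0.0, b=0.2, c=0.8, d=1.0):
    """Blend two coefficient arrays: weight the maximum-valued source more strongly
    as the normalized difference feature grows, and the average more strongly as it shrinks."""
    w = trapezoid(diff_feature, a, b, c, d)
    averaged = 0.5 * (coef_pol + coef_int)
    maximal = np.where(np.abs(coef_pol) >= np.abs(coef_int), coef_pol, coef_int)
    return w * maximal + (1.0 - w) * averaged
```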
Abstract:
A method and an oligonucleotide compound for inhibiting replication of a nidovirus in virus-infected animal cells are disclosed. The compound (i) has a nuclease-resistant backbone, (ii) is capable of uptake by the infected cells, (iii) contains between 8 and 25 nucleotide bases, and (iv) has a sequence capable of disrupting base pairing between the transcriptional regulatory sequences in the 5′ leader region of the positive-strand viral genome and the negative-strand 3′ subgenomic region. In practicing the method, infected cells are exposed to the compound in an amount effective to inhibit viral replication.
Abstract:
Although interpretation bias has been associated with the development and/or maintenance of childhood anxiety, its origins remain unclear. The present study is the first to examine intergenerational transmission of this bias from parents to their preschool-aged children via the verbal information pathway. A community sample of fifty parent–child pairs was recruited. Parents completed measures of their own trait anxiety and interpretation bias, their child’s anxiety symptoms, and a written story-stem measure, to capture the way parents tell their children stories. Interpretation bias was assessed in preschool-aged children (aged between 2 years 7 months and 5 years 8 months) using an extended story-stem paradigm. Young children’s interpretation bias was not significantly associated with their own anxiety symptoms. Neither was there evidence for a significant association between parent and child interpretation bias. However, parents who reported they would tell their child one or more threatening story endings in the written story-stem task had significantly higher anxiety than those who did not include any threatening story endings. In turn, children whose parents did not include any threatening endings in their written stories had significantly lower threat interpretations on the child story-stem paradigm, compared to those with parents who included at least one threatening story ending. The results suggest that parental verbal information could play a role in the development of interpretation bias in young children.
Abstract:
Background: Theory and treatment of anxiety disorders in young people are commonly based on the premise that interpretation biases found in anxious adults are also found in children and adolescents. Although there is some evidence that this may be the case, studies have not typically taken age into account, which is surprising given the normative changes in cognition that occur throughout childhood. The aim of the current study was to identify whether associations between anxiety disorder status and interpretation biases differed in children and adolescents. Methods: The responses of children (7-10 years) and adolescents (13-16 years) with and without anxiety disorders (n = 120) were compared on an ambiguous scenarios task. Results: Children and adolescents with an anxiety disorder showed significantly higher levels of threat interpretation and avoidant strategies than non-anxious children and adolescents. However, age significantly moderated the effect of anxiety disorder status on interpretation of ambiguity, in that adolescents with anxiety disorders showed significantly higher levels of threat interpretation and associated negative emotion than non-anxious adolescents, but a similar relationship was not observed among children. Conclusions: The findings suggest that theoretical accounts of interpretation biases in anxiety disorders in children and adolescents should distinguish between different developmental periods. For both ages, treatment that targets behavioral avoidance appears warranted. However, while adolescents are likely to benefit from treatment that addresses interpretation biases, there may be limited benefit for children under the age of ten.
Abstract:
Electrical methods of geophysical survey are known to produce results that are hard to predict at different times of the year and under differing weather conditions. This is a problem which can lead to misinterpretation of the archaeological features under investigation. The dynamic relationship between a ‘natural’ soil matrix and an archaeological feature is a complex one, which greatly affects the success of the feature’s detection when using active electrical methods of geophysical survey. This study has monitored the gradual variation of measured resistivity over a selection of study areas. By targeting difficult-to-find, and often ‘missing’, electrical anomalies of known archaeological features, this study has increased the understanding of both the detection and interpretation capabilities of such geophysical surveys. A 16-month time-lapse study over four archaeological features was undertaken to investigate the aforementioned detection problem across different soils and environments. In addition to the commonly used Twin-Probe earth resistance survey, electrical resistivity imaging (ERI) and quadrature electro-magnetic induction (EMI) were also utilised to explore the problem. Statistical analyses have provided a novel interpretation, which has yielded new insights into how the detection of archaeological features is influenced by the relationship between the target feature and the surrounding ‘natural’ soils. The study has highlighted both the complexity of, and previous misconceptions around, the predictability of the electrical methods. The analysis has confirmed that each site presents an individual and nuanced situation, with the variation clearly relating to the composition of the soils (particularly pore size) and the local weather history. The wide range of reasons behind survey success at each specific study site has been revealed. The outcomes have shown that a simplistic model of seasonality is not universally applicable to the electrical detection of archaeological features. This has led to the development of a method for quantifying survey success, enabling a deeper understanding of the unique way in which each site is affected by the interaction of local environmental and geological conditions.
Abstract:
This study aimed to verify the effect of a modified sectioning method and laser welding on the accuracy of fit of ill-fitting commercially pure titanium (cp Ti) and Ni-Cr alloy one-piece cast frameworks. Two sets of similar implant-supported frameworks were constructed. Both groups of six 3-unit implant-supported fixed partial dentures were cast as one piece [I: Ni-Cr (control) and II: cp Ti] and evaluated for passive fit in an optical microscope with both screws tightened and with only one screw tightened. All frameworks were then sectioned along the diagonal axis at the pontic region (III: Ni-Cr and IV: cp Ti). Sectioned frameworks were positioned in the matrix (10-Ncm torque) and laser-welded. Passive fit was then evaluated a second time. Data were submitted to ANOVA and Tukey-Kramer honestly significant difference tests (P < 0.05). With both screws tightened, the one-piece cp Ti group II showed significantly higher misfit values (27.57 ± 5.06 μm) than the other groups (I: 11.19 ± 2.54 μm, III: 12.88 ± 2.93 μm, IV: 13.77 ± 1.51 μm) (P < 0.05). In the single-screw-tightened test, with readings taken on the side opposite the tightened side, Ni-Cr cast as one piece (I: 58.66 ± 14.30 μm) was significantly different from the cp Ti group after diagonal sectioning (IV: 27.51 ± 8.28 μm) (P < 0.05). On the tightened side, no significant differences were found between groups (P > 0.05). Results showed that diagonally sectioning ill-fitting cp Ti frameworks lowers the misfit of prosthetic implant-supported frameworks and also improves their passivity compared with one-piece cast structures.
Abstract:
A new approach for solving the optimal power flow (OPF) problem is established by combining the reduced gradient method and the augmented Lagrangian method with barriers and exploring specific characteristics of the relations between the variables of the OPF problem. Computer simulations on IEEE 14-bus and IEEE 30-bus test systems illustrate the method.
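As a generic illustration of the augmented Lagrangian idea named in the abstract, the sketch below runs the standard outer loop for an equality-constrained problem min f(x) subject to h(x) = 0, with plain gradient steps on the inner subproblem. The barrier terms, the reduced-gradient step and the OPF-specific structure used by the authors are not reproduced; all function names and tuning constants are placeholders.

```python
# Generic augmented-Lagrangian outer loop for min f(x) s.t. h(x) = 0 (illustrative only).
import numpy as np

def augmented_lagrangian(f, grad_f, h, jac_h, x0, mu=10.0, outer_iters=20, inner_iters=200, lr=1e-2):
    """f, grad_f: objective and its gradient; h, jac_h: constraint vector and its Jacobian."""
    x = np.asarray(x0, dtype=float)
    lam = np.zeros(len(h(x)))                        # Lagrange multiplier estimates
    for _ in range(outer_iters):
        for _ in range(inner_iters):
            # Gradient of L_A(x) = f(x) + lam . h(x) + (mu / 2) ||h(x)||^2
            g = grad_f(x) + jac_h(x).T @ (lam + mu * h(x))
            x = x - lr * g                           # inner minimization step
        lam = lam + mu * h(x)                        # first-order multiplier update
    return x, lam
```

In an OPF setting, f would be the generation cost, h the power-balance equations, and the inner gradient loop would be replaced by a reduced-gradient step exploiting the problem structure, as the abstract describes.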