955 results for Gordon, Matthew: Sociolinguistics: method and interpretation
Abstract:
Over the last two centuries, papers reporting measurements of the germination process have been published. The high diversity of mathematical expressions has made comparisons between papers, and sometimes the interpretation of results, difficult. This paper therefore reviews measurements of the germination process, analysing the several mathematical expressions found in the specific literature and recovering the history, sense, and limitations of some germination measurements. Among the measurements covered are germinability; germination time; the coefficient of uniformity of germination (CUG); the coefficient of variation of the germination time (CVt); germination rate (mean rate, weighted mean rate, coefficient of velocity, germination rate of George, Timson's index, GV or Czabator's index; Throneberry and Smith's method and its adaptations, including Maguire's rate; ERI or emergence rate index; and the germination index and its modifications); the uncertainty associated with the distribution of the relative frequency of germination (U); and the synchronization index (Z). The limits of the germination measurements are discussed to ease interpretation and decision-making during comparisons. Time, rate, homogeneity, and synchrony are aspects that can be measured, informing on the dynamics of the germination process. These characteristics are important not only for physiologists and seed technologists, but also for ecologists, because it is possible to predict the degree of success of a species based on the capacity of its seed crop to spread germination through time, permitting the recruitment in the environment of some part of the seedlings formed.
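As a hedged illustration of the kind of measurement this review covers, the sketch below computes a few of the named quantities from a germination time course; the formulas follow their common definitions in the seed-biology literature, and the exact definitions used in the review may differ.

```python
# Sketch (assumed standard definitions, not necessarily the review's exact ones):
# mean germination time, its coefficient of variation (CVt), mean rate,
# uncertainty index U, and synchrony index Z from a germination time course.
from math import comb, log2, sqrt

def germination_measures(counts, times):
    """counts[i] = seeds germinating at observation time times[i] (same time units)."""
    n = sum(counts)                                              # total germinated seeds
    t_mean = sum(c * t for c, t in zip(counts, times)) / n       # mean germination time
    # sample variance of germination time, weighted by counts
    var = sum(c * (t - t_mean) ** 2 for c, t in zip(counts, times)) / (n - 1)
    cv_t = 100 * sqrt(var) / t_mean                              # CVt, in percent
    rate = 1 / t_mean                                            # mean germination rate
    freqs = [c / n for c in counts]                              # relative frequencies
    u = -sum(f * log2(f) for f in freqs if f > 0)                # uncertainty U (bits)
    z = sum(comb(c, 2) for c in counts) / comb(n, 2)             # synchrony Z in [0, 1]
    return {"mean_time": t_mean, "CVt": cv_t, "rate": rate, "U": u, "Z": z}
```

Z equals 1 only when all seeds germinate at the same observation time, and U is 0 in that perfectly synchronous case, which matches the interpretation of synchrony and uncertainty described above.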
Abstract:
The objectives of the present study were 1) to compare results obtained by the traditional manual method of measuring heart rate (HR) and heart rate response (HRR) to the Valsalva maneuver, standing and deep breathing, with those obtained using a computerized data analysis system attached to a standard electrocardiograph machine; 2) to standardize the responses of healthy subjects to cardiovascular tests, and 3) to evaluate the response to these tests in a group of patients with diabetes mellitus (DM). In all subjects (97 healthy and 143 with DM) we evaluated HRR to deep breathing, HRR to standing, HRR to the Valsalva maneuver, and blood pressure response (BPR) to standing up and to a sustained handgrip. Since there was a strong positive correlation between the results obtained with the computerized method and the traditional method, we conclude that the new method can replace the traditional manual method for evaluating cardiovascular responses with the advantages of speed and objectivity. HRR and BPR of men and women did not differ. A correlation between age and HRR was observed for standing (r = -0.48, P<0.001) and deep breathing (r = -0.41, P<0.002). Abnormal BPR to standing was usually observed only in diabetic patients with definite and severe degrees of autonomic neuropathy.
Abstract:
It is common practice to initiate supplemental feeding in newborns if body weight decreases by 7-10% in the first few days after birth (7-10% rule). Standard hospital procedure is to initiate intravenous therapy once a woman is admitted to give birth. However, little is known about the relationship between intrapartum intravenous therapy and the amount of weight loss in the newborn. The present research was undertaken in order to determine what factors contribute to weight loss in a newborn, and to examine the relationship between the practice of intravenous intrapartum therapy and the extent of weight loss post-birth. Using a cross-sectional design with a systematic random sample of 100 mother-baby dyads, we examined properties of delivery that have the potential to impact weight loss in the newborn, including method of delivery, parity, duration of labour, volume of intravenous therapy, feeding method, and birth attendant. This study indicated that the volume of intravenous therapy and method of delivery are significant predictors of weight loss in the newborn (R2=15.5, p<0.01). ROC curve analysis identified an intravenous volume cut-point of 1225 ml that would elicit a high measure of sensitivity (91.3%), and demonstrated significant Kappa agreement (p<0.01) with excess newborn weight loss. It was concluded that infusion of intravenous therapy and natural birth delivery are discriminant factors that influence excess weight loss in newborn infants. Acknowledgement of these factors should be considered in clinical practice.
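As a hedged illustration (not the study's code or data), flagging newborns by an intravenous-volume cut-point of the kind reported above, and computing the sensitivity of that rule against observed excess weight loss, could look like this; the volumes and outcomes below are invented for the example.

```python
# Sketch: sensitivity of a volume cut-point rule for excess newborn weight loss.
# The 1225 ml default mirrors the cut-point reported in the abstract; the data
# passed in are hypothetical.
def sensitivity_at_cutoff(volumes_ml, excess_loss, cutoff_ml=1225):
    """Fraction of true excess-loss cases flagged by iv_volume >= cutoff."""
    tp = sum(1 for v, y in zip(volumes_ml, excess_loss) if y and v >= cutoff_ml)
    fn = sum(1 for v, y in zip(volumes_ml, excess_loss) if y and v < cutoff_ml)
    return tp / (tp + fn)
```

In a full ROC analysis this computation would be repeated over all candidate cut-points, choosing the one with the best sensitivity/specificity trade-off.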
Abstract:
With advances in information technology, economic and financial time-series data have become increasingly available. However, if standard time-series techniques are used, this wealth of information comes with a dimensionality problem. Since most series of interest are highly correlated, their dimension can be reduced using factor analysis, a technique that has grown increasingly popular in economics since the 1990s. Given the availability of data and computational advances, several new questions arise. What are the effects and the transmission of structural shocks in a data-rich environment? Can the information contained in a large set of economic indicators help to better identify monetary policy shocks, given the problems encountered in applications using standard models? Can financial shocks be identified and their effects on the real economy measured? Can the existing factor method be improved by incorporating another dimension-reduction technique such as VARMA analysis? Does this produce better forecasts of the major macroeconomic aggregates and help with impulse-response analysis? Finally, can factor analysis be applied to random parameters? For example, are there only a small number of sources of the temporal instability of coefficients in empirical macroeconomic models? Using structural factor analysis and VARMA modelling, my thesis answers these questions in five articles. The first two chapters study the effects of monetary and financial shocks in a data-rich environment. The third article proposes a new method combining factor models and VARMA.
This approach is applied in the fourth article to measure the effects of credit shocks in Canada. The contribution of the final chapter is to impose a factor structure on time-varying parameters and to show that there is a small number of sources of this instability. The first article analyses the transmission of monetary policy in Canada using the factor-augmented vector autoregressive (FAVAR) model. Earlier studies based on VAR models found several empirical anomalies following a monetary policy shock. We estimate the FAVAR model using a large number of monthly and quarterly macroeconomic series. We find that the information contained in the factors is important for properly identifying the transmission of monetary policy, and that it helps correct the standard empirical anomalies. Finally, the FAVAR framework yields impulse-response functions for every indicator in the dataset, producing the most comprehensive analysis to date of the effects of monetary policy in Canada. Motivated by the latest economic crisis, research on the role of the financial sector has regained importance. In the second article we examine the effects and propagation of credit shocks on the real economy, using a large set of economic and financial indicators within a structural factor model. We find that a credit shock immediately increases credit spreads, lowers the value of Treasury bills, and causes a recession. These shocks have an important effect on measures of real activity, price indices, and leading and financial indicators. Unlike other studies, our identification procedure for the structural shock requires no timing restrictions between financial and macroeconomic factors.
Moreover, it gives an interpretation of the factors without restricting their estimation. In the third article we study the relation between VARMA and factor representations of vector stochastic processes and propose a new class of factor-augmented VARMA (FAVARMA) models. Our starting point is the observation that, in general, multivariate series and their associated factors cannot simultaneously follow a finite-order VAR process. We show that the dynamic process of the factors, extracted as linear combinations of the observed variables, is in general a VARMA and not a VAR, as is assumed elsewhere in the literature. Second, we show that even if the factors follow a finite-order VAR, this implies a VARMA representation for the observed series. We therefore propose the FAVARMA framework, combining these two methods of parameter reduction. The model is applied in two forecasting exercises using the US and Canadian data of Boivin, Giannoni and Stevanovic (2010, 2009), respectively. The results show that the VARMA component helps to better forecast the major macroeconomic aggregates relative to standard models. Finally, we estimate the effects of a monetary shock using the data and identification scheme of Bernanke, Boivin and Eliasz (2005). Our FAVARMA(2,1) model with six factors gives coherent and precise results on the effects and transmission of monetary policy in the United States. In contrast to the FAVAR model employed in that earlier study, where 510 VAR coefficients had to be estimated, we produce similar results with only 84 parameters for the dynamic process of the factors. The objective of the fourth article is to identify and measure the effects of credit shocks in Canada in a data-rich environment, using the structural FAVARMA model.
Within the financial-accelerator framework developed by Bernanke, Gertler and Gilchrist (1999), we proxy the external finance premium with credit spreads. On the one hand, we find that an unanticipated increase in the US external finance premium generates a significant and persistent recession in Canada, accompanied by an immediate rise in Canadian credit spreads and interest rates. The common component appears to capture the important dimensions of the cyclical fluctuations of the Canadian economy. Variance-decomposition analysis reveals that this credit shock has an important effect on different sectors of real activity, price indices, leading indicators, and credit spreads. On the other hand, an unexpected rise in the Canadian external finance premium has no significant effect in Canada. We show that the effects of credit shocks in Canada are essentially driven by global conditions, proxied here by the US market. Finally, given the identification procedure for the structural shocks, we obtain economically interpretable factors. The behaviour of agents and of the economic environment can vary over time (e.g. changes in monetary policy strategies, shock volatility), inducing parameter instability in reduced-form models. Standard time-varying parameter (TVP) models traditionally assume independent stochastic processes for all TVPs. In this article we show that the number of sources of temporal variability in the coefficients is probably very small, and we produce the first known empirical evidence of this in empirical macroeconomic models. The Factor-TVP approach proposed in Stevanovic (2010) is applied within a standard VAR model with random coefficients (TVP-VAR).
We find that a single factor explains most of the variability of the VAR coefficients, while the shock-volatility parameters vary independently. The common factor is positively correlated with the unemployment rate. The same analysis is carried out on data including the recent financial crisis. The procedure now suggests two factors, and the behaviour of the coefficients shows an important change since 2007. Finally, the method is applied to a TVP-FAVAR model. We find that only five dynamic factors govern the temporal instability of nearly 700 coefficients.
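For orientation, a generic FAVAR specification of the kind this thesis builds on can be written as follows; the notation is the standard one from the factor-model literature and is assumed here, not taken from the thesis itself:

\[
X_t = \Lambda^f F_t + \Lambda^y Y_t + e_t,
\qquad
\begin{bmatrix} F_t \\ Y_t \end{bmatrix}
= \Phi(L)\begin{bmatrix} F_{t-1} \\ Y_{t-1} \end{bmatrix} + v_t,
\]

where \(X_t\) collects the large panel of observed indicators, \(F_t\) the latent factors, \(Y_t\) the observed policy variable(s), \(\Phi(L)\) a lag polynomial, and \(e_t\), \(v_t\) error terms. The FAVARMA extension described in the third article replaces the VAR dynamics of \((F_t, Y_t)\) with a VARMA process.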
Abstract:
Among the external manifestations of scoliosis, the rib hump, which is associated with the ribs' deformities and rotations, constitutes the most disturbing aspect of the scoliotic deformity for patients. A personalized 3-D model of the rib cage is important for a better evaluation of the deformity, and hence, a better treatment planning. A novel method for the 3-D reconstruction of the rib cage, based only on two standard radiographs, is proposed in this paper. For each rib, two points are extrapolated from the reconstructed spine, and three points are reconstructed by stereo radiography. The reconstruction is then refined using a surface approximation. The method was evaluated using clinical data of 13 patients with scoliosis. A comparison was conducted between the reconstructions obtained with the proposed method and those obtained by using a previous reconstruction method based on two frontal radiographs. A first comparison criterion was the distances between the reconstructed ribs and the surface topography of the trunk, considered as the reference modality. The correlation between ribs axial rotation and back surface rotation was also evaluated. The proposed method successfully reconstructed the ribs of the 6th-12th thoracic levels. The evaluation results showed that the 3-D configuration of the new rib reconstructions is more consistent with the surface topography and provides more accurate measurements of ribs axial rotation.
Abstract:
The finite element method (FEM) is now developed to solve two-dimensional Hartree-Fock (HF) equations for atoms and diatomic molecules. The method and its implementation are described, and results are presented for the atoms Be, Ne and Ar, as well as for the diatomic molecules LiH, BH, N_2 and CO, as examples. Total energies and eigenvalues calculated with the FEM at the HF level are compared with results obtained with the standard numerical methods used for the solution of the one-dimensional HF equations for atoms, with the traditional LCAO quantum-chemical methods for diatomic molecules, and with the newly developed finite difference method at the HF level. In general, the accuracy increases from the LCAO to the finite difference to the finite element method.
Abstract:
It is well known that regression analyses involving compositional data need special attention because the data are not of full rank. For a regression analysis where both the dependent and the independent variable are components, we propose a transformation of the components emphasizing their roles as dependent and independent variables. A simple linear regression can then be performed on the transformed components. The regression line can be depicted in a ternary diagram, facilitating the interpretation of the analysis in terms of components. An example with time-budgets illustrates the method and the graphical features.
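As a hedged sketch of the general idea (not the paper's specific transformation): two-part compositions can be mapped to the real line with a log-ratio, after which ordinary simple linear regression applies. The data below are invented for illustration.

```python
# Sketch: log-ratio transform of two-part compositions, then ordinary least
# squares. The paper's actual transformation for components may differ.
from math import log

def logratio(p):
    """Two-part composition (p, 1 - p), 0 < p < 1, mapped to the real line."""
    return log(p / (1 - p))

def fit_simple_regression(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
    return ybar - b * xbar, b

# Hypothetical usage: regress one time-budget component on another,
# both on the log-ratio scale.
x_comp = [0.2, 0.4, 0.6, 0.7]    # independent component (share of time)
y_comp = [0.3, 0.45, 0.6, 0.65]  # dependent component
a, b = fit_simple_regression([logratio(x) for x in x_comp],
                             [logratio(y) for y in y_comp])
```

The fitted line lives on the transformed scale; back-transforming it yields the curve that would be drawn in the ternary diagram.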
Abstract:
We consider boundary value problems for the elliptic sine-Gordon equation posed in the half plane y > 0. This problem was considered in Gutshabash and Lipovskii (1994 J. Math. Sci. 68 197–201) using the classical inverse scattering transform approach. Given the limitations of this approach, the results obtained rely on a nonlinear constraint on the spectral data derived heuristically by analogy with the linearized case. We revisit the analysis of such problems using a recent generalization of the inverse scattering transform known as the Fokas method, and show that the nonlinear constraint of Gutshabash and Lipovskii (1994 J. Math. Sci. 68 197–201) is a consequence of the so-called global relation. We also show that this relation implies a stronger constraint on the spectral data, and in particular that no choice of boundary conditions can be associated with a decaying (possibly mod 2π) solution analogous to the pure soliton solutions of the usual, time-dependent sine-Gordon equation. We also briefly indicate how, in contrast to the evolutionary case, the elliptic sine-Gordon equation posed in the half plane does not admit linearisable boundary conditions.
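For reference, the elliptic sine-Gordon equation in the half plane reads, in its standard form,

\[
q_{xx} + q_{yy} = \sin q, \qquad -\infty < x < \infty,\ y > 0,
\]

in contrast to the evolutionary (hyperbolic) sine-Gordon equation \(q_{tt} - q_{xx} + \sin q = 0\), whose pure soliton solutions are the analogue ruled out here for the elliptic half-plane problem.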
Abstract:
Similarities between the anatomies of living organisms are often used to draw conclusions regarding the ecology and behaviour of extinct animals. Several pterosaur taxa are postulated to have been skim-feeders based largely on supposed convergences of their jaw anatomy with that of the modern skimming bird, Rynchops spp. Using physical and mathematical models of Rynchops bills and pterosaur jaws, we show that skimming is considerably more energetically costly than previously thought for Rynchops and that pterosaurs weighing more than one kilogram would not have been able to skim at all. Furthermore, anatomical comparisons between the highly specialised skull of Rynchops and those of postulated skimming pterosaurs suggest that even smaller forms were poorly adapted for skim-feeding. Our results refute the hypothesis that some pterosaurs commonly used skimming as a foraging method and illustrate the pitfalls involved in extrapolating from limited morphological convergence.
Abstract:
Interpretation of ambiguity is consistently associated with anxiety in children; however, the temporal relationship between interpretation and anxiety remains unclear, as do the developmental origins of interpretative biases. This study set out to test a model of the development of interpretative biases in a prospective study of 110 children aged 5–9 years. Children and their parents were assessed three times, annually, on measures of anxiety and interpretation of ambiguous scenarios (including, for parents, both their own interpretations and their expectations regarding their child). Three models were constructed to assess associations between parent and child anxiety and threat and distress cognitions and expectancies. All three models fit the data reasonably well and supported the conclusions that: (i) children's threat and distress cognitions were stable over time and were significantly associated with anxiety, (ii) parents' threat and distress cognitions and expectancies significantly predicted child threat cognitions at some time points, and (iii) parental anxiety significantly predicted parents' cognitions, which predicted parental expectancies at some time points. Parental expectancies were also significantly predicted by child cognitions. The findings varied depending on assessment time point and on whether threat or distress cognitions were being considered. The findings support the notion that child and parent cognitive processes, in particular parental expectations, may be a useful target in the treatment or prevention of anxiety disorders in children.
Abstract:
The United Nations Intergovernmental Panel on Climate Change (IPCC) makes it clear that climate change is due to human activities, and it recognises buildings as a distinct sector among the seven analysed in its 2007 Fourth Assessment Report. Global concerns have escalated regarding carbon emissions and sustainability in the built environment. The built environment is a human-made setting to accommodate human activities, including building and transport; it covers an interdisciplinary field addressing design, construction, operation and management. Specifically, Sustainable Buildings are expected to achieve high performance throughout the life-cycle of siting, design, construction, operation, maintenance and demolition, in the following areas:
• energy and resource efficiency;
• cost effectiveness;
• minimisation of emissions that negatively impact global warming, indoor air quality and acid rain;
• minimisation of waste discharges; and
• maximisation of the fulfilment of occupants' health and wellbeing requirements.
Professionals in the built environment sector, for example urban planners, architects, building scientists, engineers, facilities managers, performance assessors and policy makers, will play a significant role in delivering a sustainable built environment. Delivering a sustainable built environment needs an integrated approach, so it is essential for built environment professionals to have interdisciplinary knowledge in building design and management. Building and urban designers need a good understanding of the planning, design and management of buildings in terms of low carbon and energy efficiency. There are a limited number of traditional engineers who know how to design environmental systems (services engineers) in great detail.
Yet there is a very large market for technologists with multi-disciplinary skills who are able to identify the need for, envision and manage the deployment of a wide range of sustainable technologies, both passive (architectural) and active (engineering systems), and to select the appropriate approach. Employers seek applicants with skills in analysis, decision-making/assessment, computer simulation and project implementation. An integrated approach is expected in practice, which encourages built environment professionals to think 'out of the box' and learn to analyse real problems using the most relevant approach, irrespective of discipline. The Design and Management of Sustainable Built Environment book aims to produce readers able to apply fundamental scientific research to solve real-world problems in the general area of sustainability in the built environment. The book contains twenty chapters covering climate change and sustainability, urban design and assessment (planning, travel systems, urban environment), urban management (drainage and waste), buildings (indoor environment, architectural design and renewable energy), simulation techniques (energy and airflow), management (end-user behaviour, facilities and information), assessment (materials and tools), procurement, and case studies (BRE Science Park). Chapters one and two present general global issues of climate change and sustainability in the built environment. Chapter one illustrates that applying the concepts of sustainability to the urban environment (buildings, infrastructure, transport) raises some key issues for tackling climate change, resource depletion and energy supply. Buildings, and the way we operate them, play a vital role in tackling global greenhouse gas emissions. Holistic thinking and an integrated approach to delivering a sustainable built environment are highlighted.
Chapter two demonstrates the important role that buildings (their services and appliances) and building energy policies play in this area. Substantial investment is required to implement such policies, much of which will earn a good return. Chapters three and four discuss urban planning and transport. Chapter three stresses the importance of using modelling techniques at the early stage for strategic master-planning of a new development and a retrofit programme. A general framework for sustainable urban-scale master planning is introduced. This chapter also addresses the need to develop a more holistic and pragmatic view of how the built environment performs, in order to produce tools that help design for a higher level of sustainability and, in particular, capture how people plan, design and use it. Chapter four discusses microcirculation, an emerging and challenging area that relates to changing travel behaviour in the quest for urban sustainability. The chapter outlines the main drivers of travel behaviour and choices, the workings of the transport system and its interaction with urban land use. It also covers the new approach to managing urban traffic to maximise economic, social and environmental benefits. Chapters five and six present topics related to urban microclimates, including thermal and acoustic issues. Chapter five discusses urban microclimates and the urban heat island, as well as the interrelationship of urban design (urban forms and textures) with energy consumption and urban thermal comfort. It introduces models that can be used to analyse microclimates for a careful and considered approach to planning sustainable cities. Chapter six discusses urban acoustics, focusing on urban noise evaluation and mitigation. Various prediction and simulation methods for sound propagation in micro-scale urban areas, as well as techniques for large-scale urban noise-mapping, are presented.
Chapters seven and eight discuss urban drainage and waste management. The growing demand for housing and commercial developments in the 21st century, as well as the environmental pressure caused by climate change, has increased the focus on sustainable urban drainage systems (SUDS). Chapter seven discusses the SUDS concept, an integrated approach to surface water management. It takes into consideration quality, quantity and amenity aspects to provide a more pleasant habitat for people as well as increasing the biodiversity value of the local environment. Chapter eight discusses the main issues in urban waste management. It points out that population increases, land-use pressures, and technical and socio-economic influences have become inextricably interwoven, making a safe means of dealing with humanity's waste ever more challenging. Sustainable building design needs to consider healthy indoor environments, minimising energy for heating, cooling and lighting, and maximising the utilisation of renewable energy. Chapter nine considers how people respond to the physical environment and how that is used in the design of indoor environments. It considers environmental components such as thermal, acoustic, visual, air quality and vibration, and their interaction and integration. Chapter ten introduces the concept of passive building design and its relevant strategies, including passive solar heating, shading, natural ventilation, daylighting and thermal mass, in order to minimise the heating and cooling load as well as energy consumption for artificial lighting. Chapter eleven discusses the growing importance of integrating Renewable Energy Technologies (RETs) into buildings, the range of technologies currently available and what to consider during technology selection processes in order to minimise carbon emissions from burning fossil fuels.
The chapter draws to a close by highlighting the issues concerning system design and the need for careful integration and management of RETs once installed, and for home owners and operators to understand the characteristics of the technology in their building. Computer simulation tools play a significant role in sustainable building design because, as modern built environment design (buildings and systems) becomes more complex, it requires tools to assist in the design process. Chapter twelve gives an overview of the primary benefits and users of simulation programs and the role of simulation in the construction process, and examines the validity and interpretation of simulation results. Chapter thirteen focuses on the Computational Fluid Dynamics (CFD) simulation method used for optimisation and performance assessment of technologies and solutions for sustainable building design, and its application through a series of case studies. People and building performance are intimately linked. A better understanding of occupants' interaction with the indoor environment is essential to building energy and facilities management. Chapter fourteen focuses on the issue of occupant behaviour: principally its impact on building performance, and the influence of building performance on occupants. Chapter fifteen explores the discipline of facilities management and the contribution that this emerging profession makes to securing sustainable building performance. The chapter highlights a much greater diversity of opportunities in sustainable building design that extends well into the operational life. Chapter sixteen reviews the concepts of modelling information flows and the use of Building Information Modelling (BIM), describing these techniques and how these aspects of information management can help drive sustainability. An explanation is offered of why information management is the key to 'life-cycle' thinking in sustainable building and construction.
Measurement of building performance and sustainability is a key issue in delivering a sustainable built environment. Chapter seventeen identifies the means by which construction materials can be evaluated with respect to their sustainability. It identifies the key issues that affect the sustainability of construction materials and the methodologies commonly used to assess them. Chapter eighteen focuses on the topics of green building assessment, green building materials, sustainable construction and operation. Commonly used assessment tools such as the BRE Environmental Assessment Method (BREEAM), Leadership in Energy and Environmental Design (LEED) and others are introduced. Chapter nineteen discusses sustainable procurement, one of the areas to have naturally emerged from the overall sustainable development agenda. It aims to ensure that current use of resources does not compromise the ability of future generations to meet their own needs. Chapter twenty is a best-practice exemplar: the BRE Innovation Park, which features a number of demonstration buildings built to the UK Government's Code for Sustainable Homes. It showcases the very latest innovative methods of construction and cutting-edge technology for sustainable buildings. In summary, the Design and Management of Sustainable Built Environment book is the result of the co-operation and dedication of the individual chapter authors. We hope readers benefit from gaining a broad interdisciplinary knowledge of design and management in the built environment in the context of sustainability. We believe that the knowledge and insights of our academic and professional colleagues from different institutions and disciplines illuminate a way of delivering a sustainable built environment through holistic, integrated design and management approaches. Last, but not least, I would like to take this opportunity to thank all the chapter authors for their contribution.
I would like to thank David Lim for his assistance in the editorial work and proofreading.
Resumo:
Infrared polarization and intensity imagery provide complementary and discriminative information for image understanding and interpretation. In this paper, a novel fusion method is proposed that effectively merges this information with various combination rules. It makes use of both the low-frequency and high-frequency image components from the support value transform (SVT), and applies fuzzy logic in the combination process. The images to be fused (both infrared polarization and intensity images) are first decomposed into low-frequency component images and support value image sequences by the SVT. The low-frequency component images are then combined using a fuzzy combination rule blending three sub-combination methods: (1) region feature maximum, (2) region feature weighted average, and (3) pixel value maximum. The support value image sequences are merged using a fuzzy combination rule fusing two sub-combination methods: (1) pixel energy maximum and (2) region feature weighting. With two newly defined features as variables, namely the low-frequency difference feature for the low-frequency component images and the support-value difference feature for the support value image sequences, trapezoidal membership functions are developed to tune the fuzzy fusion process. Finally, the fused image is obtained by inverse SVT operations. Experimental results from both visual inspection and quantitative evaluation indicate the superiority of the proposed method over its counterparts in the fusion of infrared polarization and intensity images.
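The core fuzzy mechanism described above can be illustrated with a minimal sketch. The following Python fragment is not the authors’ implementation: the function names and the breakpoint parameters (a, b, c, d) are hypothetical, and it shows only the general idea of using a trapezoidal membership function, driven by a difference feature, to weight the blend of two component images.

```python
import numpy as np

def trapezoidal_membership(x, a, b, c, d):
    """Trapezoidal membership function: rises linearly on [a, b],
    equals 1 on [b, c], falls linearly on [c, d], and is 0 outside [a, d]."""
    x = np.asarray(x, dtype=float)
    rise = np.clip((x - a) / (b - a), 0.0, 1.0)
    fall = np.clip((d - x) / (d - c), 0.0, 1.0)
    return np.minimum(rise, fall)

def fuzzy_blend(component_a, component_b, diff_feature, a, b, c, d):
    """Pixel-wise blend of two component images. The membership value w,
    computed from a difference feature (e.g. a low-frequency difference),
    weights image A; (1 - w) weights image B."""
    w = trapezoidal_membership(diff_feature, a, b, c, d)
    return w * component_a + (1.0 - w) * component_b
```

In a full pipeline, the difference feature would be computed per pixel or per region from the SVT decompositions of the polarization and intensity images, and separate membership functions would tune the low-frequency and support-value combination rules.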
Resumo:
A method and oligonucleotide compound for inhibiting replication of a nidovirus in virus-infected animal cells are disclosed. The compound (i) has a nuclease-resistant backbone, (ii) is capable of uptake by the infected cells, (iii) contains between 8 and 25 nucleotide bases, and (iv) has a sequence capable of disrupting base pairing between the transcriptional regulatory sequences in the 5′ leader region of the positive-strand viral genome and the negative-strand 3′ subgenomic region. In practicing the method, infected cells are exposed to the compound in an amount effective to inhibit viral replication.
Resumo:
Although interpretation bias has been associated with the development and/or maintenance of childhood anxiety, its origins remain unclear. The present study is the first to examine intergenerational transmission of this bias from parents to their preschool-aged children via the verbal information pathway. A community sample of fifty parent–child pairs was recruited. Parents completed measures of their own trait anxiety and interpretation bias, a measure of their child’s anxiety symptoms, and a written story-stem measure designed to capture the way parents tell their children stories. Interpretation bias was assessed in the preschool-aged children (aged between 2 years 7 months and 5 years 8 months) using an extended story-stem paradigm. Young children’s interpretation bias was not significantly associated with their own anxiety symptoms, nor was there evidence of a significant association between parent and child interpretation bias. However, parents who reported that they would tell their child one or more threatening story endings in the written story-stem task had significantly higher anxiety than those who did not include any threatening story endings. In turn, children whose parents did not include any threatening endings in their written stories made significantly fewer threat interpretations on the child story-stem paradigm than children whose parents included at least one threatening story ending. The results suggest that parental verbal information could play a role in the development of interpretation bias in young children.