994 results for Intermediate model
Abstract:
This research seeks a fashion design methodology for reliably translating innovative two-dimensional ideas on paper, via a structural design sculpture, into an intermediate model. The author, both a fashion designer and a researcher, has witnessed the problems that arise during the transfer from two-dimensional creative sketch to three-dimensional garment: some of the initial ideas are lost and others are distorted. This research therefore concerns fashion designers engaged in transferring a two-dimensional sketch through the method of 'sculptural form giving'. The method applies the ideal model of conceptual sculpture to the fashion design process, akin to practices used in architecture; these parallel design disciplines share similar processes for realizing design ideas. Moreover, this research investigates and formalizes processes that use the measurable space between the garment and the body to help transfer garment variation and scale. In sum, the research aims to give fashion designers a creative method for transferring their imaginative concepts through intermediate modeling.
Abstract:
New arguments are presented proving that successive (repeated) measurements have a memory and, in effect, remember each other. Recognizing this peculiarity can essentially change the existing paradigm of conventional observation of the behavior of complex systems, and it leads to an intermediate model (IM). This IM provides a very accurate fit of the measured data in terms of Prony's decomposition. The decomposition, in turn, contains a small set of fitting parameters relative to the number of initial data points and allows measured data to be compared even when a "best fit" model based on specific physical principles is absent. As an example, we consider two X-ray diffractometers (denoted in the paper as A, "cheap", and B, "expensive") that are used, after proper calibration, to measure the same substance (corundum, α-Al2O3). The amplitude-frequency response (AFR) obtained within Prony's decomposition can be used to compare the spectra recorded by the A and B X-ray diffractometers (XRDs) for calibration and other practical purposes. We also prove that the Fourier decomposition corresponds to an "ideal" experiment without memory, while Prony's decomposition corresponds to a real measurement and can in that case be fitted within the IM. New statistical parameters describing the properties of experimental equipment (irrespective of its internal "filling") are found. The suggested approach is rather general and can be used for the calibration and comparison of different complex dynamical systems for practical purposes.
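The abstract does not spell out the fitting procedure, but the Prony decomposition it builds on is classical. As a rough illustration only (the function and demo signal below are ours, not the paper's IM procedure), here is a minimal NumPy sketch of the two-step Prony method: a linear-prediction solve for the complex poles, then a linear least-squares solve for the amplitudes.

    import numpy as np

    def prony(x, p):
        # Fit x[n] ~ sum_k c[k] * z[k]**n with p exponential terms.
        x = np.asarray(x, dtype=complex)
        N = len(x)
        # Step 1: linear prediction x[n] = -sum_{m=1..p} a[m-1] * x[n-m],
        # solved in the least-squares sense.
        A = np.array([x[n - p:n][::-1] for n in range(p, N)])
        a = np.linalg.lstsq(A, -x[p:], rcond=None)[0]
        # Roots of the characteristic polynomial are the poles z_k.
        z = np.roots(np.concatenate(([1.0 + 0j], a)))
        # Step 2: least-squares amplitudes from the Vandermonde system.
        V = np.vander(z, N, increasing=True).T   # V[n, k] = z[k]**n
        c = np.linalg.lstsq(V, x, rcond=None)[0]
        return z, c

    # Demo: a damped cosine plus a slow decay needs p = 3 complex terms.
    t = np.arange(200)
    sig = 1.5 * np.exp(-0.02 * t) * np.cos(0.3 * t) + 0.8 * np.exp(-0.01 * t)
    z, c = prony(sig, 3)
    recon = (np.vander(z, len(sig), increasing=True).T @ c).real
    print(np.max(np.abs(recon - sig)))   # small residual

Note how the fitted (z, c) pairs form a parameter set that is small relative to the N = 200 data points, which is the compression the abstract refers to.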
Abstract:
Second-rank tensor interactions, such as quadrupolar interactions between spin-1 deuterium nuclei and the electric field gradients created by chemical bonds, are affected by rapid random molecular motions that modulate the orientation of the molecule with respect to the external magnetic field. In biological and model membrane systems, where a distribution of dynamically averaged anisotropies (quadrupolar splittings, chemical shift anisotropies, etc.) is present and where, in addition, various parts of the sample may undergo a partial magnetic alignment, the numerical analysis of the resulting Nuclear Magnetic Resonance (NMR) spectra is a mathematically ill-posed problem. However, numerical methods (de-Pakeing, Tikhonov regularization) exist that allow a simultaneous determination of both the anisotropy and orientational distributions. An additional complication arises when relaxation is taken into account. This work presents a method of obtaining the orientation dependence of the relaxation rates that can be used for the analysis of molecular motions on a broad range of time scales. An arbitrary set of exponential decay rates is described by a three-term truncated Legendre polynomial expansion in the orientation dependence, as appropriate for a second-rank tensor interaction, and a linear approximation to the individual decay rates is made. Thus a severe numerical instability caused by the presence of noise in the experimental data is avoided. At the same time, enough flexibility in the inversion algorithm is retained to achieve a meaningful mapping from raw experimental data to a set of intermediate, model-free parameters.
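For a second-rank tensor interaction, the angular dependence of a relaxation rate contains only even Legendre polynomials up to fourth order, which is the three-term truncated expansion the abstract invokes. In LaTeX (notation assumed for illustration; the paper's symbols may differ):

    R(\theta) \approx A_0\,P_0(\cos\theta) + A_2\,P_2(\cos\theta) + A_4\,P_4(\cos\theta),
    \qquad
    P_2(u) = \tfrac{1}{2}\left(3u^2 - 1\right), \quad
    P_4(u) = \tfrac{1}{8}\left(35u^4 - 30u^2 + 3\right).

Fitting only the three coefficients A_0, A_2, A_4, together with the linearization of the decay rates, is what keeps the inversion stable in the presence of noise.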
Abstract:
Simulating large and complex systems, such as computing grids, is a difficult task. Current simulators, despite providing accurate results, are significantly hard to use. They usually demand strong programming knowledge, which is not typical of today's users of grids and high-performance computing. This need for computer expertise prevents such users from simulating how the environment will respond to their applications, which can cause large losses of efficiency and waste precious computational resources. In this paper we introduce iSPD, the iconic Simulator of Parallel and Distributed Systems, in which grid models are produced through an iconic interface. We describe the simulator and its intermediate model languages. The results presented here provide insight into its ease of use and accuracy.
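The paper itself defines iSPD's intermediate model languages; the sketch below is purely hypothetical (the Machine/Link fields and the MAQ/LINK keywords are invented for illustration) and only conveys the general idea of an iconic front end emitting a line-oriented intermediate model for a simulation engine.

    from dataclasses import dataclass, field

    @dataclass
    class Machine:
        name: str
        power_mflops: float      # processing capacity of the node
        load: float = 0.0        # background load fraction

    @dataclass
    class Link:
        src: str
        dst: str
        bandwidth_mbps: float
        latency_s: float

    @dataclass
    class GridModel:
        machines: list = field(default_factory=list)
        links: list = field(default_factory=list)

        def to_text(self) -> str:
            # Serialize the icon graph into a simple textual intermediate model.
            lines = [f"MAQ {m.name} {m.power_mflops} {m.load}" for m in self.machines]
            lines += [f"LINK {lk.src} {lk.dst} {lk.bandwidth_mbps} {lk.latency_s}" for lk in self.links]
            return "\n".join(lines)

    g = GridModel()
    g.machines += [Machine("master", 1500.0), Machine("node1", 800.0, 0.1)]
    g.links.append(Link("master", "node1", 100.0, 0.001))
    print(g.to_text())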
Abstract:
In this paper we describe an approach to interfacing Abstract State Machines (ASM) with Multiway Decision Graphs (MDG) to enable tool support for the formal verification of ASM descriptions. ASM is a specification method for software and hardware that provides a powerful means of modeling various kinds of systems. MDGs are decision diagrams based on an abstract representation of data and are used primarily for modeling hardware systems. The notions of ASM and MDG are hence closely related, making it appealing to link the two concepts. The proposed interface between ASM and MDG uses two steps: first, the ASM model is transformed into a flat, simple transition system as an intermediate model; second, this intermediate model is transformed into the syntax of the input language of the MDG tool, MDG-HDL. We have successfully applied this transformation scheme to a case study, the Island Tunnel Controller, where we automatically generated the corresponding MDG-HDL models from ASM specifications.
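The first step, flattening an ASM into a simple transition system, can be pictured with a toy guarded-update interpreter; this is our own illustration, not the authors' algorithm or MDG-HDL syntax. All guards are evaluated against the current state, so updates fire simultaneously, as in ASM semantics.

    # Each rule is (guard, variable, update); all guards read the *current* state.
    def flatten(rules):
        def step(state):
            nxt = dict(state)
            for guard, var, update in rules:
                if guard(state):
                    nxt[var] = update(state)
            return nxt
        return step

    # Toy example: a two-phase controller expressed as guarded updates.
    rules = [
        (lambda s: s["light"] == "red",   "light", lambda s: "green"),
        (lambda s: s["light"] == "green", "light", lambda s: "red"),
    ]
    step = flatten(rules)
    print(step({"light": "red"}))    # {'light': 'green'}

The second step would then print such a flat transition relation in the MDG tool's input language, MDG-HDL.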
Abstract:
We describe the use of a murine model to evaluate resistance against subsequent challenge following a primary infection with oncospheres of Echinococcus granulosus. Mice (Kunming strain) were infected with hatched oncospheres of Echinococcus granulosus; 21 days later a second challenge was given by a different route of infection. A primary infection by intraperitoneal (i.p.) injection stimulated 100 and 90.5% protection, in terms of reduced cyst numbers, against a secondary infection given subcutaneously (s.c.) or intravenously (i.v.), respectively. A primary infection given s.c. followed by i.p. or i.v. challenge resulted in 84.0 and 100% protection, respectively. Intravenous infection followed by i.p. or s.c. challenge resulted in 98.5 and 69.4% protection, respectively. With the i.v. route of infection, almost all resultant cysts were present in the lungs. The data show that a primary infection with oncospheres can induce total or a high degree of protection against a subsequent challenge and confirm that natural (concomitant) immunity can be stimulated in the intermediate host as the result of a primary infection. This may explain the decline in hydatid infection in sheep older than 2 years in hyper-endemic areas such as those found in Xinjiang, China. These older sheep may have been infected earlier and subsequently self-cured, with the primary infection stimulating an immune response that protects the intermediate host animals from further infection.
Abstract:
Glycopeptide-intermediate resistant Staphylococcus aureus (GISA) strains are characterized by multiple changes in the cell wall and altered expression of global virulence regulators. We investigated whether GISA are affected in their infectivity in a rat model of experimental endocarditis. The glycopeptide-susceptible, methicillin-resistant S. aureus M1V2 and its laboratory-derived GISA M1V16 were examined for their ability to (i) adhere to fibrinogen and fibronectin in vitro, (ii) persist in the bloodstream after intravenous inoculation, (iii) colonize aortic vegetations in rats, and (iv) compete for valve colonization after co-inoculation. GISA M1V16 and M1V2 adhered similarly to fibrinogen and fibronectin in vitro. In rats, GISA M1V16 was cleared faster from the blood (P < 0.05) and required 100 times more bacteria than the parent M1V2 (10^6 versus 10^4 CFU) to infect 90% of vegetations. GISA M1V16 also had 100 to 1000 times lower bacterial densities in vegetations. Moreover, after co-inoculation with GISA M1V16 and M1V2Rif, a rifampin-resistant variant of M1V2 used to discriminate between them in organ cultures, GISA M1V16 was out-competed by its glycopeptide-susceptible counterpart. Thus, in rats with experimental endocarditis, GISA showed attenuated virulence, likely due to faster clearance from the blood and reduced fitness in cardiac vegetations. The GISA phenotype appeared globally detrimental to infectivity.
Abstract:
This thesis draws on the principles of grounded theory (Strauss & Corbin, 1998) to address the lack of documentation on the strategies adopted by "intermediary agents" to promote the use of research-based knowledge among education practitioners. The term "intermediary agent" refers to people positioned at the interface between the producers and the users of scientific knowledge who encourage and support school practitioners in applying scientific knowledge in their practice. The study is part of a project of the Quebec Ministère de l'Éducation, du Loisir et du Sport aimed at improving the academic success of secondary school students from disadvantaged backgrounds. Intermediary agents from different levels of the education system who had been mandated to transfer research-based knowledge to school practitioners in the schools targeted by the project were recruited for the study. A snowball sampling strategy (Biernacki & Waldorf, 1981; Patton, 1990) was used to identify people recognized by their peers for the quality of the support they offer school practitioners in using research in their practice. Sixteen semi-structured interviews were conducted. Analysis of the data yields a knowledge-transfer intervention model comprising 32 influence strategies grouped into 6 intervention components: relational, cognitive, political, facilitative, evaluative, and ongoing support and follow-up. The results suggest that the relational, cognitive, and political strategies are interdependent and establish a favorable climate in which agents can exert greater influence on school practitioners' appropriation of the knowledge-use process. They further show that the ongoing support and follow-up component is important for sustaining changes in practitioners' use of research in their practice. The theoretical implications of the model, together with explanations of the mechanisms involved in the different components, are put in perspective both with the scientific literature on knowledge transfer in the health and education sectors and with work from related disciplines (notably psychology). Finally, avenues for practice are proposed.
Abstract:
Both historical and idealized climate model experiments are performed with a variety of Earth system models of intermediate complexity (EMICs) as part of a community contribution to the Intergovernmental Panel on Climate Change Fifth Assessment Report. Historical simulations start at 850 CE and continue through to 2005. The standard simulations include changes in forcing from solar luminosity, Earth's orbital configuration, CO2, additional greenhouse gases, land use, and sulphate and volcanic aerosols. In spite of very different modelled pre-industrial global surface air temperatures, overall 20th-century trends in surface air temperature and carbon uptake are reasonably well simulated when compared to observed trends. Land carbon fluxes show much more variation between models than ocean carbon fluxes, and recent land fluxes appear to be slightly underestimated. It is possible that recent modelled climate trends or climate–carbon feedbacks are overestimated, resulting in too much land carbon loss, or that carbon uptake due to CO2 and/or nitrogen fertilization is underestimated. Several one-thousand-year-long, idealized 2× and 4× CO2 experiments are used to quantify standard model characteristics, including transient and equilibrium climate sensitivities and climate–carbon feedbacks. The values from EMICs generally fall within the range given by general circulation models. Seven additional historical simulations, each including a single specified forcing, are used to assess the contributions of different climate forcings to the overall climate and carbon cycle response. The response of surface air temperature is the linear sum of the responses to the individual forcings, while the carbon cycle response shows a non-linear interaction between land-use change and CO2 forcings for some models. Finally, the pre-industrial portions of the last-millennium simulations are used to assess historical carbon-climate feedbacks. Given the specified forcing, the EMICs tend to underestimate the drop in surface air temperature and CO2 between the Medieval Climate Anomaly and the Little Ice Age estimated from palaeoclimate reconstructions. This could be a result of unforced variability within the climate system, uncertainty in the reconstructions of temperature and CO2, errors in the reconstructions of the forcing used to drive the models, or the incomplete representation of certain processes within the models. Given the forcing datasets used in this study, the models calculate significant land-use emissions over the pre-industrial period. This implies that land-use emissions might need to be taken into account when estimating climate–carbon feedbacks from palaeoclimate reconstructions.
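The climate–carbon feedbacks the abstract quantifies are conventionally expressed with a linear feedback decomposition (notation from the standard feedback-analysis framework, not necessarily this paper's):

    \Delta C_L = \beta_L\,\Delta C_a + \gamma_L\,\Delta T,
    \qquad
    \Delta C_O = \beta_O\,\Delta C_a + \gamma_O\,\Delta T,

where \Delta C_a is the change in atmospheric CO2, \beta (the concentration–carbon feedback) measures carbon uptake per ppm of CO2, and \gamma (the climate–carbon feedback) measures carbon loss per degree of warming, for land (L) and ocean (O) separately.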
Abstract:
Radiative forcing and climate sensitivity have been widely used as concepts for understanding climate change. This work performs climate change experiments with an intermediate general circulation model (IGCM) to examine the robustness of the radiative forcing concept for carbon dioxide and solar constant changes. The IGCM has been specifically developed as a computationally fast model, but one that allows an interaction between physical processes and large-scale dynamics; it allows many long integrations to be performed relatively quickly. It employs a fast and accurate radiative transfer scheme, as well as simple convection and surface schemes and a slab ocean, to model the effects of climate change mechanisms on atmospheric temperatures and dynamics with a reasonable degree of complexity. The climatology of the IGCM, run at T21 resolution with 22 levels, is compared to European Centre for Medium-Range Weather Forecasts reanalysis data. The response of the model to changes in carbon dioxide and solar output is examined when these changes are applied globally and when constrained geographically (e.g. over land only). The CO2 experiments have a roughly 17% higher climate sensitivity than the solar experiments, and a forcing at high latitudes causes a 40% higher climate sensitivity than a forcing applied only at low latitudes. Nevertheless, despite differences in the model feedbacks, climate sensitivity is roughly constant over a range of distributions of CO2 and solar forcings. Hence, in the IGCM at least, the radiative forcing concept is capable of predicting global surface temperature changes to within 30% for the perturbations described here. It is concluded that radiative forcing remains a useful tool for assessing the natural and anthropogenic impact of climate change mechanisms on surface temperature.
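The radiative forcing concept being tested reduces to a single linear relation (standard notation, assumed here for illustration):

    \Delta T_s = \lambda\,\Delta F,

where \Delta F is the radiative forcing in W m^{-2} and \lambda is the climate sensitivity parameter in K (W m^{-2})^{-1}. The abstract's conclusion is that \lambda is approximately independent of how the CO2 or solar perturbation is distributed, so this product predicts the global surface temperature change to within about 30%.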