835 results for "Multi-layered analysis"
Abstract:
Considering that there is nothing left untouched by fashion, and going beyond the already exhausted discussion about formal intersections, this research introduces the catwalk as the real arena of mediation between fashion and architecture. By assuming this condition, the catwalk embodies new modes of production that appropriate its space and turn it into a machine for generating multiple, if not infinite, meanings. Fashion, as a creative project, has used the catwalk as a frame for rearranging its visual narrative and renewing itself as a social phenomenon. This research contends, however, that the current typologies of catwalks do not facilitate the understanding of the collection – their primary goal – but instead present an environment composed of multi-layered visual formats, a complex construct that collides space, time, and action in the creation of other territories. Departing from an analysis of the catwalk as a system whose many variables can produce diverse combinations, this research presents the hypothesis that a new catwalk system is being formed, built entirely out of layers of information. Such a scenario would mark fashion's final immersion into the fabrics of virtuality. While the debate about the relevance of fashion shows has become more evident today, this research serves as an introductory speculation on how architectural thinking can introduce methodologies of analysis within the framework of fashion shows, by proposing a reading of the catwalk as a system through specific procedures inherent to architectural projects.
Such an approach intertwines both practices in a common territory where space, design, behaviour, movement, and bodies are organized for the creation of new visual possibilities, and where interactions are triggered in the making of novelty and messages. KEYWORDS fashion, system, virtual, information, architecture
Abstract:
La Malinche's serene face and beautifully dressed figure dominate the first half of the lost sixteenth-century manuscript El Lienzo de Tlaxcala, which survives today only in a copy made after the original. In this paper I propose an expanded study of these twenty-one representations of La Malinche, as they offer insight into the Tlaxcalans' reverence, respect, and spiritual belief in her. The Tlaxcalan leaders recognized her influence on both the Spanish and indigenous leaders during the conquest and cleverly designed a painted narrative that reinforced their connection with La Malinche to enhance their position with the Spanish. Through a multi-layered study that combines a detailed account of her biography set against gender roles in Pre-Hispanic America, formal and iconographic analysis of rarely examined images from the Lienzo de Tlaxcala that link La Malinche to the Virgin Mary, and a review of ethnographic research on religious beliefs among contemporary Tlaxcalans, I will demonstrate that the mutable history of this woman made her the ideal supernatural protagonist for the people of Tlaxcala.
Abstract:
A microwave-based thermal nebulizer (MWTN) has been employed for the first time as an on-line preconcentration device in inductively coupled plasma atomic emission spectrometry (ICP-AES). By appropriate selection of the experimental conditions, the MWTN can be operated either as a conventional thermal nebulizer or as an on-line analyte preconcentration and nebulization device. Thus, when operating at microwave powers above 100 W with highly concentrated alcohol solutions, the amount of energy available per unit mass of solvent (EMR) is high enough to completely evaporate the solvent inside the system and, as a consequence, the analyte is deposited (and thus preconcentrated) on the inner walls of the MWTN capillary. When the EMR is then reduced to an appropriate value (e.g., by reducing the microwave power at a constant sample uptake rate), the retained analyte is swept along by the liquid-gas stream, and an analyte-enriched aerosol is generated and introduced into the plasma cell. Emission signals obtained with the MWTN operating in preconcentration-nebulization mode improved with increasing preconcentration time and sample uptake rate, as well as with decreasing nozzle inner diameter. When running a pure ethanol solution under optimum experimental conditions, the MWTN in preconcentration-nebulization mode afforded limits of detection up to one order of magnitude lower than those obtained when operating the MWTN exclusively as a nebulizer. To validate the method, multi-element analysis (Al, Ca, Cd, Cr, Cu, Fe, K, Mg, Mn, Na, Pb and Zn) of different commercial spirit samples was performed by ICP-AES. Analyte recoveries for all the elements studied ranged between 93% and 107%, and the dynamic linear range covered up to 4 orders of magnitude (i.e., from 0.1 to 1000 μg L−1). In these analyses, both MWTN operating modes afforded similar results.
Nevertheless, the preconcentration-nebulization mode permits the determination of a larger number of analytes owing to its higher detection capability.
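The order-of-magnitude gain in detection limits is consistent with simple mass-balance arithmetic. The sketch below uses hypothetical flow rates and times (none are reported in this abstract) and assumes complete analyte retention during loading and complete release during the elution/nebulization step:

```python
# Nominal enrichment factor for an on-line preconcentration step.
# All numbers below are hypothetical, chosen only to illustrate the scaling:
# enrichment grows with preconcentration time and sample uptake rate.
uptake_precon = 100.0   # uptake rate during preconcentration (uL/min)
t_precon = 5.0          # preconcentration time (min)
uptake_release = 50.0   # uptake rate during the release step (uL/min)
t_release = 1.0         # duration of the transient release peak (min)

mass_loaded = uptake_precon * t_precon          # proportional to analyte mass
volume_release = uptake_release * t_release
enrichment = mass_loaded / volume_release
print(enrichment)  # -> 10.0, i.e. one order of magnitude
```

With these (made-up) values the nominal enrichment is a factor of ten, matching the scale of the LOD improvement reported above.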
Abstract:
Slag composition determines the physical and chemical properties, as well as the application performance, of molten oxide mixtures. It is therefore necessary to establish a routine instrumental technique that produces accurate and precise analytical results for better process and production control. In the present paper, a multi-component analysis technique for powdered metallurgical slag samples using an X-ray fluorescence spectrometer (XRFS) is demonstrated. The technique provides rapid and accurate results with minimal sample preparation: it eliminates the requirement for a fused disc by using briquetted samples protected by a layer of Borax®. While the use of theoretical alpha coefficients allows accurate calibrations to be made with fewer standard samples, applying a pseudo-Voigt function to curve fitting makes it possible to resolve overlapping peaks in X-ray spectra that cannot be physically separated. The analytical results for both certified reference materials and industrial slag samples measured with the present technique are comparable to those obtained for the same samples by conventional fused-disc measurements.
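The pseudo-Voigt profile is a linear mixture of a Gaussian and a Lorentzian sharing one FWHM. A minimal curve-fitting sketch on synthetic data (not the paper's spectra or software) shows how two overlapped peaks can be resolved:

```python
import numpy as np
from scipy.optimize import curve_fit

def pseudo_voigt(x, amp, center, fwhm, eta):
    """Pseudo-Voigt: eta * Lorentzian + (1 - eta) * Gaussian, common FWHM."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    gauss = np.exp(-((x - center) ** 2) / (2.0 * sigma ** 2))
    lorentz = 1.0 / (1.0 + ((x - center) / (fwhm / 2.0)) ** 2)
    return amp * (eta * lorentz + (1.0 - eta) * gauss)

def two_peaks(x, a1, c1, w1, e1, a2, c2, w2, e2):
    return pseudo_voigt(x, a1, c1, w1, e1) + pseudo_voigt(x, a2, c2, w2, e2)

# Synthetic overlapped doublet, e.g. two nearby spectral lines plus noise
x = np.linspace(0, 10, 500)
rng = np.random.default_rng(0)
y = two_peaks(x, 100, 4.6, 0.8, 0.3, 60, 5.4, 0.8, 0.3) + rng.normal(0, 1.0, x.size)

p0 = [90, 4.5, 1.0, 0.5, 50, 5.5, 1.0, 0.5]   # rough starting guesses
popt, _ = curve_fit(two_peaks, x, y, p0=p0)
print(np.round([popt[1], popt[5]], 2))          # fitted peak centres
```

The fit recovers the two centres even though the peaks overlap substantially; this is the general mechanism behind resolving peaks "that cannot be physically separated".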
Abstract:
QTL detection experiments in livestock species commonly use the half-sib design: each male is mated to a number of females, and each female produces a limited number of progeny. Analysis consists of attempting to detect associations between phenotype and genotype measured on the progeny. When family sizes are limiting, experimenters may wish to incorporate as much information as possible into a single analysis. However, combining information across sires is problematic because of incomplete linkage disequilibrium between the markers and the QTL in the population. This study describes formulae for obtaining maximum likelihood estimates (MLEs) via the expectation-maximization (EM) algorithm for use in a multiple-trait, multiple-family analysis. The model assumes a QTL with only two alleles and a common within-sire error variance. Compared with single-family analyses, power can be improved up to fourfold with multi-family analyses. The accuracy and precision of QTL location estimates are also substantially improved. With small family sizes, multi-family, multi-trait analyses substantially reduce, but do not entirely remove, biases in QTL effect estimates. In situations where multiple QTL alleles are segregating, the multi-family analysis will average out the effects of the different QTL alleles.
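The flavour of the EM machinery can be conveyed with a toy analogue: a two-component normal mixture with a common variance, mirroring the biallelic-QTL, common-error-variance model (the paper's actual formulae handle multiple traits and families; the data here are simulated):

```python
import numpy as np

def em_two_allele(y, n_iter=200):
    """EM for a two-component normal mixture with a common variance --
    a toy analogue of a biallelic QTL with common within-sire error."""
    y = np.asarray(y, dtype=float)
    mu1, mu2 = y.min(), y.max()        # crude starting values
    pi, sigma2 = 0.5, y.var()
    for _ in range(n_iter):
        # E-step: posterior probability that a record is from component 2
        d1 = pi * np.exp(-(y - mu1) ** 2 / (2.0 * sigma2))
        d2 = (1.0 - pi) * np.exp(-(y - mu2) ** 2 / (2.0 * sigma2))
        w = d2 / (d1 + d2)
        # M-step: weighted updates of means, mixing proportion, variance
        mu1 = np.sum((1.0 - w) * y) / np.sum(1.0 - w)
        mu2 = np.sum(w * y) / np.sum(w)
        pi = 1.0 - w.mean()
        sigma2 = np.sum((1.0 - w) * (y - mu1) ** 2 + w * (y - mu2) ** 2) / y.size
    return mu1, mu2, pi, sigma2

# Simulated progeny phenotypes: two QTL genotype classes, true effect = 2.5
rng = np.random.default_rng(1)
y = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(2.5, 1.0, 300)])
mu1, mu2, pi, sigma2 = em_two_allele(y)
print(round(mu2 - mu1, 2))             # estimated substitution effect
```

The same E-step/M-step alternation underlies the multiple-trait, multiple-family formulae; only the sufficient statistics become richer.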
Abstract:
The present investigation aimed to critically examine the factor structure and psychometric properties of the Anxiety Sensitivity Index–Revised (ASI-R). Confirmatory factor analysis using a clinical sample of adults (N = 248) revealed that the ASI-R could be improved substantially by removing 15 problematic items so as to retain the most robust dimensions of anxiety sensitivity. This modified scale was renamed the 21-item Anxiety Sensitivity Index (21-item ASI) and reanalyzed with a large sample of normative adults (N = 435), revealing configural and metric invariance across groups. Further comparisons with alternative models, using multi-sample analysis, indicated that the 21-item ASI was the best-fitting model for both groups. There was also evidence of internal consistency, test-retest reliability, and construct validity in both samples, suggesting that the 21-item ASI is a useful instrument for investigating the construct of anxiety sensitivity in both clinical and normative populations.
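One concrete piece of the psychometric evidence cited here, internal consistency, is typically reported as Cronbach's alpha. A minimal sketch on simulated data (the sample size and item count mirror the normative sample, but the scores are synthetic, not the study's):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Simulated 21-item scale driven by one latent factor plus item noise
rng = np.random.default_rng(0)
latent = rng.normal(size=(435, 1))
items = latent + rng.normal(scale=1.0, size=(435, 21))
print(round(cronbach_alpha(items), 2))
```

With unit loadings and unit noise, reliability per item is 0.5, so the 21-item alpha lands around 0.95, illustrating how item count drives internal consistency.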
Abstract:
Flow control in computer communication systems is generally a multi-layered structure consisting of several mechanisms operating independently at different levels. Evaluating the performance of networks in which different flow control mechanisms act simultaneously is an important area of research, and it is examined in depth in this thesis. The thesis presents a model, based on closed queueing network theory, of a finite-resource computer communication network equipped with three levels of flow control: end-to-end control of virtual circuits, network access control of external messages at the entry nodes, and hop-level control between nodes. The model is solved by a heuristic technique based on an equivalent reduced network and heuristic extensions to the mean value analysis algorithm. The method has significant computational advantages and overcomes the limitations of exact methods: it can be used to solve large network models with finite buffers and many virtual circuits. The model and its heuristic solution are validated by simulation, and the interaction between the three levels of flow control is investigated. A queueing model is developed for the admission delay on virtual circuits with end-to-end control, in which messages arrive from independent Poisson sources, and the selection of the optimum window limit is considered. Several advanced network access schemes are proposed to improve the performance of the network as well as that of selected traffic streams, and numerical results are presented. Finally, a model for the dynamic control of input traffic is developed: based on Markov decision theory, an optimal control policy is formulated. Numerical results are given, and throughput-delay performance is shown to be better with dynamic control than with static control.
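The mean value analysis algorithm that the heuristic extends can be sketched in a few lines. Below is exact single-class MVA for a closed network of queueing stations; the station demands and population are made-up illustrative numbers, and the thesis's heuristic for finite buffers and many virtual circuits is considerably more involved:

```python
def mva(service_demands, n_customers):
    """Exact mean value analysis for a closed, single-class network of
    queueing stations (demand = visit ratio x mean service time)."""
    q = [0.0] * len(service_demands)   # mean queue lengths at population 0
    for n in range(1, n_customers + 1):
        # Arrival theorem: an arriving job sees the network with one fewer
        # customer, so residence time per station is D * (1 + Q).
        r = [d * (1.0 + qi) for d, qi in zip(service_demands, q)]
        x = n / sum(r)                 # throughput at population n
        q = [x * ri for ri in r]       # Little's law applied per station
    return x, q

# Three stations (e.g. two hops and a destination) and 10 circulating messages
throughput, queues = mva([0.10, 0.08, 0.05], 10)
print(round(throughput, 3))
```

Throughput saturates at the reciprocal of the bottleneck demand (here 1/0.10 = 10 jobs per unit time), which is what makes window limits and admission control worth optimizing.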
Abstract:
The present work describes the development of a proton-induced X-ray emission (PIXE) analysis system especially designed and built for routine quantitative multi-elemental analysis of large numbers of samples. The historical and general development of the analytical technique and the physical processes involved are discussed. The philosophy, design, constructional details and evaluation of a versatile vacuum chamber, an automatic multi-sample changer, an on-demand beam pulsing system and an ion beam current monitoring facility are described. The system calibration, using thin standard foils of Si, P, S, Cl, K, Ca, Ti, V, Fe, Cu, Ga, Ge, Rb, Y and Mo, was undertaken at proton beam energies of 1 to 3 MeV in steps of 0.5 MeV and compared with theoretical calculations. An independent calibration check using bovine liver Standard Reference Material was performed. The minimum detectable limits have been experimentally determined at detector positions of 90° and 135° with respect to the incident beam for the above range of proton energies, as a function of atomic number Z. The system has detection limits of typically 10−7 to 10−9 g for elements 14
Abstract:
The elemental analysis of soil is useful in forensic and environmental sciences. Methods were developed and optimized for two laser-based multi-element analysis techniques: laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) and laser-induced breakdown spectroscopy (LIBS). This work represents the first use of a 266 nm laser for forensic soil analysis by LIBS. Sample preparation methods were developed and optimized for a variety of sample types, including pellets for large bulk soil specimens (470 mg) and sediment-laden filters (47 mg), and tape-mounting for small transfer-evidence specimens (10 mg). Analytical performance for sediment-filter pellets and tape-mounted soils was similar to that achieved with bulk pellets. An inter-laboratory comparison exercise was designed to evaluate the performance of the LA-ICP-MS and LIBS methods, as well as micro X-ray fluorescence (μXRF), across multiple laboratories. Limits of detection (LODs) were 0.01-23 ppm for LA-ICP-MS, 0.25-574 ppm for LIBS, and 16-4400 ppm for μXRF, all well below the levels normally seen in soils. Good intra-laboratory precision (≤ 6% relative standard deviation (RSD) for LA-ICP-MS; ≤ 8% for μXRF; ≤ 17% for LIBS) and inter-laboratory precision (≤ 19% for LA-ICP-MS; ≤ 25% for μXRF) were achieved for most elements, which is encouraging for a first inter-laboratory exercise. While LIBS generally has higher LODs and RSDs than LA-ICP-MS, both were capable of generating good-quality multi-element data sufficient for discrimination purposes. Multivariate methods using principal components analysis (PCA) and linear discriminant analysis (LDA) were developed for the discrimination of soils from different sources. Specimens from different sites that were indistinguishable by color alone were discriminated by elemental analysis. Correct classification rates of 94.5% or better were achieved in a simulated forensic discrimination of three similar sites for both LIBS and LA-ICP-MS.
Results for tape-mounted specimens were nearly identical to those achieved with pellets. Methods were tested on soils from the USA, Canada, and Tanzania. Within-site heterogeneity was site-specific. Elemental differences were greatest for specimens separated by large distances, even within the same lithology. Elemental profiles can be used to discriminate soils from different locations and to narrow down locations even when mineralogy is similar.
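A minimal version of the PCA-then-LDA discrimination workflow can be sketched with scikit-learn on simulated elemental profiles (the per-site mean shifts and element counts are invented stand-ins, not the study's data):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Simulated element concentrations for three soil sources; each site's
# mean shift stands in for real site-to-site elemental differences.
rng = np.random.default_rng(0)
n_per_site, n_elements = 40, 10
X = np.vstack([rng.normal(loc=shift, scale=1.0, size=(n_per_site, n_elements))
               for shift in (0.0, 1.5, 3.0)])
y = np.repeat([0, 1, 2], n_per_site)

# PCA reduces the multi-element profile; LDA discriminates the sources
clf = make_pipeline(PCA(n_components=5), LinearDiscriminantAnalysis())
scores = cross_val_score(clf, X, y, cv=5)   # 5-fold classification accuracy
print(round(scores.mean(), 3))
```

Cross-validated accuracy, rather than resubstitution accuracy, is the appropriate analogue of the correct-classification rates quoted above.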
Abstract:
Terrestrial ecosystems, occupying more than 25% of the Earth's surface, can serve as 'biological valves' in regulating the anthropogenic emissions of atmospheric aerosol particles and greenhouse gases (GHGs) as responses to their surrounding environments. While the significance of quantifying the exchange rates of GHGs and atmospheric aerosol particles between the terrestrial biosphere and the atmosphere is hardly questioned in many scientific fields, progress in improving model predictability, data interpretation, or the combination of the two remains impeded by the lack of a precise framework elucidating their dynamic transport processes over a wide range of spatiotemporal scales. The difficulty in developing prognostic modeling tools to quantify the source or sink strength of these atmospheric substances is further magnified by the fact that the climate system is also sensitive to the feedback from terrestrial ecosystems, forming the so-called 'feedback cycle'. Hence, the emergent need is to reduce uncertainties when assessing this complex and dynamic feedback cycle, which is necessary to support the decisions of mitigation and adaptation policies associated with human activities (e.g., anthropogenic emission controls and land use management) under current and future climate regimes.

With the goal of improving predictions of the biosphere-atmosphere exchange of biologically active gases and atmospheric aerosol particles, the main focus of this dissertation is on revising and up-scaling the biotic and abiotic transport processes from leaf to canopy scales. The validity of previous modeling studies in determining the exchange rates of gases and particles is evaluated, with detailed descriptions of their limitations. Mechanistic modeling approaches, along with empirical studies across different scales, are employed to refine the mathematical descriptions of the surface conductance responsible for gas and particle exchange as commonly adopted by all operational models. Specifically, how variation in horizontal leaf area density within the vegetated medium, leaf size, and leaf microroughness impacts the aerodynamic attributes, and thereby the ultrafine particle collection efficiency at the leaf/branch scale, is explored using wind tunnel experiments interpreted with a porous media model and a scaling analysis. A multi-layered and size-resolved second-order closure model, combined with particle flux and concentration measurements within and above a forest, is used to explore the particle transport processes within the canopy sub-layer and the partitioning of particle deposition between the canopy medium and the forest floor. For gases, a modeling framework accounting for leaf-level boundary layer effects on the stomatal pathway for gas exchange is proposed and combined with sap flux measurements in a wind tunnel to assess how leaf-level transpiration varies with increasing wind speed. How exogenous environmental conditions and endogenous soil-root-stem-leaf hydraulic and eco-physiological properties impact the above- and below-ground water dynamics in the soil-plant system and shape plant responses to droughts is assessed with a porous media model that accommodates transient water flow within the plant vascular system and is coupled with the aforementioned leaf-level gas exchange model and a soil-root interaction model. It should be noted that tackling all aspects of the potential issues causing uncertainties in forecasting the feedback cycle between terrestrial ecosystems and the climate is unrealistic in a single dissertation, but further research questions and opportunities based on the foundation derived from this dissertation are also briefly discussed.
Abstract:
The thesis begins with classical cooperation and transfers it to the digital world. It gives a detailed overview of the young research fields of smart city, shareconomy, and crowdsourcing, and links these fields with entrepreneurship. The core research aim is to identify connections between the research fields of smart city, shareconomy, and crowdsourcing on the one hand and entrepreneurial activities on the other, together with the specific fields of application, success factors, and conditions for entrepreneurs. The thesis consists of seven peer-reviewed publications. Based on primary and secondary data, the existence of entrepreneurial opportunities in the fields of smart city, shareconomy, and crowdsourcing could be confirmed. The first part of the thesis (publications 1-3) consists of literature reviews that secure the fundamental base for further research; this part contributes newly created definitions and a marked sharpening of the research fields for the near future. In the second part (publications 4-7), empirical fieldwork (in-depth interviews with entrepreneurs) and quantitative analyses (fuzzy-set/qualitative comparative analysis and binary logistic regression) contribute additional new insights. In summary, the insights are multi-layered: theoretical (e.g., new definitions, a sharpening of the research field), methodological (e.g., the first application of fuzzy-set/qualitative comparative analysis to crowdfunding), and qualitative (the first application of in-depth interviews with entrepreneurs in the fields of smart city and shareconomy). The global research question could be answered: the link between entrepreneurship and smart city, shareconomy, and crowdfunding was confirmed, concrete fields of application were identified, and further developments were touched upon.
This work contributes strongly to these young fields of research through much-needed basic work, new qualitative approaches, innovative methods, and new insights, and it offers opportunities for discussion, criticism, and support for further research.
Abstract:
Accurate estimation of road pavement geometry and layer material properties through the use of proper nondestructive testing and sensor technologies is essential for evaluating a pavement's structural condition and determining options for maintenance and rehabilitation. For these purposes, pavement deflection basins produced by the nondestructive Falling Weight Deflectometer (FWD) test are commonly used. The FWD test drops weights on the pavement to simulate traffic loads and measures the resulting deflection basins. Backcalculation of pavement geometry and layer properties from FWD deflections is a difficult inverse problem, and its solution with conventional mathematical methods is often challenging due to the ill-posed nature of the problem. In this dissertation, a hybrid algorithm was developed to seek robust and fast solutions to this inverse problem. The algorithm is based on soft computing techniques, mainly Artificial Neural Networks (ANNs) and Genetic Algorithms (GAs), together with numerical analysis techniques to properly simulate the geomechanical system. A widely used layered pavement analysis program, ILLI-PAVE, was employed in the analyses of flexible pavements of various types, including full-depth asphalt and conventional flexible pavements, built on either lime-stabilized soils or untreated subgrade. Nonlinear properties of the subgrade soil and the base course aggregate as transportation geomaterials were also considered. A computer program, the Soft Computing Based System Identifier (SOFTSYS), was developed. In SOFTSYS, ANNs are used as surrogate models that provide faster solutions than the nonlinear finite element program ILLI-PAVE. The deflections obtained from FWD tests in the field were matched with the predictions obtained from the numerical simulations to develop the SOFTSYS models.
The solution to the inverse problem for multi-layered pavements is computationally hard to achieve and is often not feasible due to field variability and the quality of the collected data. The primary difficulty in the analysis arises from the substantial increase in the degree of non-uniqueness of the mapping from the pavement layer parameters to the FWD deflections. The insensitivity of some layer properties lowered SOFTSYS model performance. Still, the SOFTSYS models were shown to work effectively with synthetic data obtained from ILLI-PAVE finite element solutions; in general, SOFTSYS solutions very closely matched the ILLI-PAVE mechanistic pavement analysis results. For validation, field-collected FWD data were successfully used to predict pavement layer thicknesses and layer moduli of in-service flexible pavements. Some of the most promising SOFTSYS results indicated average absolute errors on the order of 2%, 7%, and 4% for the Hot Mix Asphalt (HMA) thickness estimates of full-depth asphalt pavements, full-depth pavements on lime-stabilized soils, and conventional flexible pavements, respectively. The field validations also produced meaningful results: thickness data obtained from Ground Penetrating Radar testing matched reasonably well with the predictions from the SOFTSYS models. The differences observed in the HMA and lime-stabilized soil layer thicknesses were attributed to variability in the FWD deflection data. The backcalculated asphalt concrete layer thicknesses matched better for full-depth asphalt pavements built on lime-stabilized soils than for conventional flexible pavements. Overall, SOFTSYS was capable of producing reliable thickness estimates despite the variability of field-constructed asphalt layer thicknesses.
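The surrogate-plus-GA backcalculation loop can be illustrated with a toy sketch. The forward model below is a hypothetical smooth stand-in (NOT ILLI-PAVE or any mechanistic pavement model), and a plain real-coded genetic algorithm searches for layer moduli whose predicted deflection basin matches a "measured" one. The model is deliberately underdetermined in the individual moduli, a small-scale version of the non-uniqueness discussed in the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_deflections(moduli):
    """Hypothetical stand-in for a forward pavement response model:
    deflections fall off with a weighted combination of the three layer
    moduli and with sensor offset from the load."""
    offsets = np.array([0.0, 0.3, 0.6, 0.9, 1.2])      # sensor offsets (m)
    stiffness = moduli[0] + 0.5 * moduli[1] + 0.25 * moduli[2]
    return 1000.0 / (stiffness * (1.0 + offsets))      # deflections (mils)

target = forward_deflections(np.array([3.0, 2.0, 1.0]))  # "measured" basin

def sse(pop):
    """Sum-of-squares mismatch between predicted and measured basins."""
    return np.array([np.sum((forward_deflections(p) - target) ** 2) for p in pop])

# Plain real-coded GA: tournament selection, blend crossover, Gaussian mutation
pop = rng.uniform(0.5, 5.0, size=(60, 3))              # candidate moduli
for _ in range(100):
    err = sse(pop)
    pairs = rng.integers(0, len(pop), size=(len(pop), 2))
    winners = np.where(err[pairs[:, 0]] < err[pairs[:, 1]], pairs[:, 0], pairs[:, 1])
    parents = pop[winners]
    alpha = rng.uniform(size=(len(pop), 1))
    children = alpha * parents + (1.0 - alpha) * parents[::-1]
    pop = np.clip(children + rng.normal(0.0, 0.05, children.shape), 0.5, 5.0)

best = pop[np.argmin(sse(pop))]
print(np.round(forward_deflections(best) - target, 2))  # basin mismatch
```

Because the toy model depends only on one combined stiffness, many moduli triples reproduce the same basin, which is precisely why insensitive layer properties are hard to recover and why fast surrogates (ANNs in SOFTSYS) are needed to make the search affordable.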