46 results for Full-scale Physical Modelling
at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
We present a detailed evaluation of the seasonal performance of the Community Multiscale Air Quality (CMAQ) modelling system and the PSU/NCAR meteorological model coupled to a new Numerical Emission Model for Air Quality (MNEQA). The combined system simulates air quality at a fine resolution (3 km horizontal, 1 h temporal) over north-eastern Spain, where ozone-pollution episodes are frequent. An extensive database compiled over two periods, May to September of 2009 and 2010, is used to evaluate the meteorological simulations and the chemical outputs. Our results indicate that the model accurately reproduces the hourly and the 1-h and 8-h maximum ozone surface concentrations measured at the air quality stations, as the statistical values fall within the EPA and EU recommendations. However, to further improve forecast accuracy, three simple bias-adjustment techniques, mean subtraction (MS), ratio adjustment (RA), and hybrid forecast (HF), based on 10 days of available comparisons, are applied. The results show that MS performed better than RA or HF, although all three bias-adjustment techniques significantly reduce the systematic errors in the ozone forecasts.
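As a rough illustration of how such adjustments operate, here is a minimal sketch assuming the common formulations from the bias-correction literature; the function names, the synthetic 10-day history and the HF blending weight are illustrative, not taken from the paper.

```python
import numpy as np

def mean_subtraction(forecast, past_fc, past_obs):
    """MS: remove the mean bias of the last N forecast/observation pairs."""
    return forecast - np.mean(past_fc - past_obs)

def ratio_adjustment(forecast, past_fc, past_obs):
    """RA: rescale by the ratio of mean observed to mean forecast values."""
    return forecast * np.mean(past_obs) / np.mean(past_fc)

def hybrid_forecast(forecast, past_fc, past_obs, w=0.5):
    """HF (assumed form): blend the raw and the MS-corrected forecast."""
    return w * forecast + (1 - w) * mean_subtraction(forecast, past_fc, past_obs)

past_fc = np.array([80, 95, 90, 100, 85, 92, 88, 97, 91, 86.0])  # 10 days
past_obs = past_fc - 8.0        # synthetic systematic over-prediction
print(mean_subtraction(100.0, past_fc, past_obs))                # 92.0
```

In this reading, MS removes an additive offset, RA removes a multiplicative one, and HF hedges between the raw and the MS-corrected forecast.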
Abstract:
Thermal systems exchanging heat and mass by conduction, convection and radiation (solar and thermal) occur in many engineering applications, such as energy storage by solar collectors, window glazing in buildings, refrigeration of plastic moulds and air handling units. Often these thermal systems are composed of various elements, for example a building with walls, windows, rooms, etc. It would be of particular interest to have a modular thermal-system code formed by connecting different modules for the elements, with the flexibility to use and change models for individual elements and to add or remove elements without changing the entire code. A numerical approach to the heat transfer and fluid flow in such systems helps save the time and cost of full-scale experiments, and also aids optimisation of the system parameters. The subsequent sections present a short summary of the work done so far on the orientation of the thesis in the field of numerical methods for heat transfer and fluid flow applications, the work in progress, and the future work.
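To make the modular idea concrete, here is a minimal sketch of such an element-based structure, assuming lumped-capacitance modules coupled by conductances and explicit Euler time stepping; the class names and parameter values are illustrative, not from the thesis.

```python
# Minimal sketch of a modular thermal network: lumped-capacitance elements
# exchanging heat through declared links, advanced by explicit Euler stepping.
class Element:
    def __init__(self, name, capacity, T0):
        self.name, self.C, self.T = name, capacity, T0
        self.links = []                     # (other element, conductance W/K)

    def connect(self, other, conductance):
        self.links.append((other, conductance))
        other.links.append((self, conductance))

    def heat_in(self):                      # net heat flux into this element
        return sum(G * (o.T - self.T) for o, G in self.links)

def step(elements, dt):
    flux = {e: e.heat_in() for e in elements}   # evaluate all fluxes first...
    for e in elements:
        e.T += dt * flux[e] / e.C               # ...then update temperatures

wall = Element("wall", capacity=5e5, T0=10.0)   # J/K, degrees C
room = Element("room", capacity=2e5, T0=20.0)
wall.connect(room, conductance=50.0)
for _ in range(3600):                           # one hour at dt = 1 s
    step([wall, room], dt=1.0)
print(round(wall.T, 2), round(room.T, 2))       # both drift toward equilibrium
```

Because elements interact only through declared links, a module can be added, removed or replaced without changing the rest of the code, which is precisely the flexibility argued for above.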
Abstract:
The purpose of this work is to present the construction and application of the Questionari de Desarrollo Emocional para Adultos (QDE-A), the Catalan version of the Cuestionario de Desarrollo Emocional para Adultos (CDE-A). The instruments available for measuring emotional competence are scarce, and all of them are subject to criticism centred mainly on the lack of a clear theoretical framework and of firm empirical foundations (Pérez, Petrides and Furnham, 2005). The QDE-A belongs to the research line on emotional education of GROP (Grupo de Investigación en Orientación Psicopedagógica, the Research Group in Psychopedagogical Counselling). It is a self-report questionnaire based on the theoretical framework of emotional education developed by GROP (Bisquerra, 2000 and 2007), according to which emotional competence comprises five dimensions: emotional awareness, emotional regulation, emotional autonomy, social competences, and competences for life and well-being. The QDE-A provides a global score and a score for each of these dimensions. This article describes the development process leading to the final version, whose long form consists of a 48-item scale. The data are based on a sample of 1,537 adults. Reliability, measured by Cronbach's alpha, is 0.92 for the complete scale and above 0.70 for each of the dimensions. The correlation between each dimension and the total score is significant in all cases at the p < 0.01 level. The QDE-A responds to the need for a rigorous instrument, adapted to the Catalan population, that makes it possible to assess the level of emotional competence in adults and to ground interventions in emotional education.
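For reference, the reliability statistic quoted is Cronbach's alpha, which for a scale of $k$ items with item variances $\sigma_i^2$ and total-score variance $\sigma_T^2$ is

$$\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_i^{2}}{\sigma_T^{2}}\right),$$

so the reported $\alpha = 0.92$ for the 48-item scale indicates high internal consistency.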
Abstract:
This work proposes a fully-digital interface circuit for the measurement of inductive sensors using a low-cost microcontroller (µC) and without any intermediate active circuit. Apart from the µC and the sensor, the circuit requires just an external resistor and a reference inductance, so that two RL circuits with a high-pass filter (HPF) topology are formed. The µC appropriately excites these RL circuits in order to measure the discharging time of the voltage across each inductance (i.e., sensing and reference), and then uses these discharging times to estimate the sensor inductance. Experimental tests using a commercial µC show a non-linearity error (NLE) lower than 0.5% FSS (Full-Scale Span) when measuring inductances from 1 mH to 10 mH and from 10 mH to 100 mH.
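The estimation principle follows from the HPF discharge law: the voltage across the inductance decays as v(t) = V0*exp(-t*R/L), so the time to fall to a fixed threshold is proportional to L, and using a reference inductance cancels R and the threshold out of the result. A minimal sketch, with illustrative values for R, V0 and the threshold (the paper's actual component values are not assumed):

```python
import math

R = 1000.0           # external series resistor, ohms (illustrative)
V0, VTH = 3.3, 1.0   # excitation level and digital input threshold, volts

def discharge_time(L):
    """Time for the HPF output v(t) = V0*exp(-t*R/L) to fall to VTH."""
    return (L / R) * math.log(V0 / VTH)

def estimate_inductance(t_sensor, t_ref, L_ref):
    """With identical R and threshold, discharge times scale linearly with L."""
    return L_ref * (t_sensor / t_ref)

t_ref = discharge_time(10e-3)        # reference inductance: 10 mH
t_x = discharge_time(4.7e-3)         # unknown sensor inductance
print(estimate_inductance(t_x, t_ref, L_ref=10e-3))   # recovers 4.7 mH
```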
Abstract:
In this instrumental study we analyse the factor structure of the Screen for Child Anxiety Related Emotional Disorders (SCARED) in a Spanish sample using exploratory and confirmatory factor analysis. As a second objective, we develop a short form of the questionnaire for rapid screening and, finally, we analyse the reliabilities of both versions. The SCARED was administered to a community sample of 1,508 children aged between 8 and 12 years. The sample was randomly split, using one half for the exploratory analysis and the other half for the confirmatory study. Furthermore, a reduced version of the SCARED was developed using the Schmid-Leiman procedure. Exploratory factor analysis yielded a four-factor structure comprising Somatic/panic, Generalized anxiety, Separation anxiety and Social phobia factors. This structure was confirmed using confirmatory factor analysis. The four factors, the full scale and the short scale showed good reliabilities. The results obtained seem to indicate that the Spanish version of the SCARED has good internal consistency and, along with other recent results, a structure of four related factors that replicates the dimensions proposed for anxiety disorders by the DSM-IV-TR.
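A minimal sketch of the split-sample workflow, using synthetic item responses and scikit-learn's FactorAnalysis as a stand-in for the EFA step; the 41-item count and 0-2 response coding are assumptions, and the CFA on the held-out half would in practice be fitted with dedicated SEM software.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
items = rng.integers(0, 3, size=(1508, 41)).astype(float)  # synthetic responses

# Random split: one half for the exploratory step, one half held out for CFA.
half_efa, half_cfa = train_test_split(items, test_size=0.5, random_state=0)
efa = FactorAnalysis(n_components=4, rotation="varimax").fit(half_efa)
print(efa.components_.shape)        # 4 factors x 41 item loadings
```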
Abstract:
The main objective of this paper is to develop a methodology that takes into account the human factor extracted from the database used by recommender systems, and which makes it possible to address their specific prediction and recommendation problems. In this work, we propose to extract each user's human-values scale from the user database in order to improve suitability in open environments, such as recommender systems. For this purpose, the methodology is applied to the user's data after the user has interacted with the system. The methodology is exemplified with a case study.
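The abstract does not spell out the extraction step, so the following is purely one plausible illustration: items carry tags for value categories, and a user's ratings weight those tags into a normalised profile. The tag matrix, ratings and normalisation below are assumptions, not the authors' method.

```python
import numpy as np

item_values = np.array([        # rows: items, cols: value categories (assumed)
    [1, 0, 0],
    [0, 1, 0],
    [1, 0, 1],
])
ratings = np.array([5.0, 2.0, 4.0])   # one user's ratings of the three items

profile = ratings @ item_values       # rating-weighted value counts
profile /= profile.sum()              # normalised human-values scale
print(profile.round(2))
```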
Abstract:
Digital art interfaces present cognitive paradigms that deal with the recognition of symbols and representations through interaction. What is presented in this paper is an approximation of the bodily experience in that particular scenario, and a new proposal that aims to contribute further ideas and criteria to the analysis of the learning process of a participant discovering an interactive space or interface. To that end, I propose a first approach in which I metaphorically extrapolate the stages of developmental psychology stated by Jean Piaget to the interface design domain.
Abstract:
We show how certain N-dimensional dynamical systems are able to exploit the full instability capabilities of their fixed points to undergo Hopf bifurcations, and how such behaviour produces complex time evolutions based on the nonlinear combination of the oscillation modes that emerge from these bifurcations. For widely separated oscillation frequencies, the evolutions describe robust wave-form structures, usually periodic, in which self-similarity with respect to both the time scale and the system dimension is clearly appreciated. For closer frequencies, the evolution signals usually appear irregular but are still based on the repetition of complex wave-form structures. The study is developed by considering vector fields with a scalar-valued nonlinear function of a single variable that is a linear combination of the N dynamical variables. In this case, linear stability analysis can be used to design N-dimensional systems in which the fixed points of a saddle-node pair experience up to N-1 Hopf bifurcations with preselected oscillation frequencies. The secondary processes occurring in the phase region where the variety of limit cycles appears may be rather complex and difficult to characterize, but they produce the nonlinear mixing of oscillation modes with relatively generic features.
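One common way to write such a vector field, consistent with the description above though not necessarily the authors' exact notation, is

$$\dot{\mathbf{x}} = A\,\mathbf{x} + \mathbf{b}\,f(u), \qquad u = \mathbf{c}^{\top}\mathbf{x} = \sum_{j=1}^{N} c_j x_j,$$

where $f$ is the scalar-valued nonlinear function. The Jacobian at a fixed point $\mathbf{x}^{*}$ is then

$$J^{*} = A + f'(u^{*})\,\mathbf{b}\,\mathbf{c}^{\top},$$

so the linear stability analysis reduces to placing the eigenvalues of a rank-one update of $A$, which is what allows the Hopf oscillation frequencies to be preselected.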
Abstract:
Bacteria are the dominant form of life on the planet: they can survive in very adverse environments, and in some cases they can generate substances that are toxic to us when ingested. Their presence in food makes predictive microbiology an essential field within food microbiology to guarantee food safety. A bacterial culture can pass through four growth phases: lag, exponential, stationary and death. This work advances the understanding of the phenomena intrinsic to the lag phase, which is of great interest in predictive microbiology. The study, carried out over four years, was approached with the Individual-based Modelling (IbM) methodology using the INDISIM (INDividual DIScrete SIMulation) simulator, which was improved for this purpose. INDISIM made it possible to study two causes of the lag phase separately, and to address the behaviour of the culture from a mesoscopic perspective. It was found that the lag phase must be studied as a dynamic process, not defined by a single parameter. Studying the evolution of variables such as the distribution of individual properties across the population (for example, the mass distribution) or the growth rate made it possible to distinguish two stages within the lag phase, an initial stage and a transition stage, and to deepen the understanding of what happens at the cellular level. Several results predicted by the simulations were observed experimentally with flow cytometry. The agreement between simulations and experiments is neither trivial nor accidental: the system studied is a complex system, so the agreement over time of several interrelated parameters endorses the methodology used in the simulations. It can therefore be stated that the soundness of the INDISIM methodology has been verified experimentally.
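As an illustration of the individual-based approach (not INDISIM itself), the following minimal sketch lets a population-level lag emerge from individual masses and stochastic division thresholds rather than from an imposed lag parameter; all parameter values are illustrative.

```python
import random

random.seed(1)
cells = [random.uniform(0.2, 0.5) for _ in range(100)]  # initial masses (a.u.)

for t in range(60):                        # time steps
    new_cells = []
    for i, m in enumerate(cells):
        m += 0.05 * m                      # individual uptake and growth
        if m > random.gauss(1.0, 0.05):    # stochastic division threshold
            m /= 2.0                       # divide: keep one half...
            new_cells.append(m)            # ...and add the other as a new cell
        cells[i] = m
    cells.extend(new_cells)
    if t % 10 == 0:
        print(t, len(cells))               # population curve shows the lag
```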
Abstract:
In the PhD thesis “Sound Texture Modeling” we deal with the statistical modelling of textural sounds like water, wind, rain, etc., for synthesis and classification. Our initial model is based on a wavelet-tree signal decomposition and the modelling of the resulting sequence by means of a parametric probabilistic model that can be situated within the family of models trainable via expectation maximization (the hidden Markov tree model). Our model is able to capture key characteristics of the source textures (water, rain, fire, applause, crowd chatter), and faithfully reproduces some of the sound classes. In terms of the more general taxonomy of natural events proposed by Gaver, we worked on models for natural event classification and segmentation. While the event labels comprise physical interactions between materials that do not have textural properties in their entirety, those segmentation models can help in identifying textural portions of an audio recording useful for analysis and resynthesis. Following our work on concatenative synthesis of musical instruments, we have developed a pattern-based synthesis system that allows one to sonically explore a database of units by means of their representation in a perceptual feature space. Concatenative synthesis with “molecules” built from sparse atomic representations also allows capturing low-level correlations in perceptual audio features, while facilitating the manipulation of textural sounds based on their physical and perceptual properties. We have approached the problem of sound texture modelling for synthesis from different directions, namely a low-level signal-theoretic point of view through a wavelet transform, and a more high-level point of view driven by perceptual audio features in the concatenative synthesis setting. The developed framework provides a unified approach to the high-quality resynthesis of natural texture sounds. Our research is embedded within the Metaverse 1 European project (2008-2011), where our models contribute as low-level building blocks within a semi-automated soundscape generation system.
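A minimal sketch of the wavelet-tree front end using PyWavelets; the 'db4' wavelet, the 5-level depth and the per-band statistics are illustrative stand-ins for the hidden Markov tree model actually trained in the thesis.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
signal = rng.normal(size=4096)          # stand-in for a texture recording

coeffs = pywt.wavedec(signal, 'db4', level=5)   # wavelet-tree decomposition
for level, c in enumerate(coeffs):
    # A hidden Markov tree would model dependencies across the tree; here we
    # just summarise each band, the kind of statistic such a model captures.
    print(level, c.size, round(float(np.mean(np.abs(c))), 3))
```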
Abstract:
This analysis was stimulated by the real data analysis problem of household expenditure data. The full dataset contains expenditure data for a sample of 1224 households. The expenditure is broken down at 2 hierarchical levels: 9 major levels (e.g. housing, food, utilities, etc.) and 92 minor levels. There are also 5 factors and 5 covariates at the household level. Not surprisingly, there are a small number of zeros at the major level, but many zeros at the minor level. The question is how best to model the zeros. Clearly, models that try to add a small amount to the zero terms are not appropriate in general, as at least some of the zeros are clearly structural, e.g. alcohol/tobacco for households that are teetotal. The key question then is how to build suitable conditional models. For example, is the sub-composition of spending excluding alcohol/tobacco similar for teetotal and non-teetotal households? In other words, we are looking for sub-compositional independence. Also, what determines whether a household is teetotal? Can we assume that it is independent of the composition? In general, whether a household is teetotal will clearly depend on the household-level variables, so we need to be able to model this dependence. The other tricky question is that, with zeros on more than one component, we need to be able to model dependence and independence of zeros on the different components. Lastly, while some zeros are structural, others may not be; for example, for expenditure on durables, it may be chance as to whether a particular household spends money on durables within the sample period. This would clearly be distinguishable if we had longitudinal data, but may still be distinguishable by looking at the distribution, on the assumption that random zeros will usually occur in situations where any non-zero expenditure is not small. While this analysis is based on economic data, the ideas carry over to many other situations, including geological data, where minerals may be missing for structural reasons (similar to alcohol), or missing because they occur only in random regions which may be missed in a sample (similar to the durables).
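A minimal sketch of the two-part (conditional) idea on synthetic data: first model the zero pattern as a function of the household-level variables, then model the remaining subcomposition conditionally on that indicator. The covariates, the effect size and the group labels below are fabricated for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1224, 5))                  # household-level covariates
p = 1 / (1 + np.exp(-(0.8 * X[:, 0] - 0.5)))    # synthetic dependence
teetotal = rng.binomial(1, p)                   # structural-zero indicator

# Part 1: model whether the alcohol/tobacco part is structurally zero.
zero_model = LogisticRegression().fit(X, teetotal)
print(zero_model.coef_.round(2))

# Part 2 (not shown): compare the log-ratio subcomposition excluding
# alcohol/tobacco between the two groups to test sub-compositional
# independence.
```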
Abstract:
Sediment composition is mainly controlled by the nature of the source rock(s), and by chemical (weathering) and physical processes (mechanical crushing, abrasion, hydrodynamic sorting) during alteration and transport. Although the factors controlling these processes are conceptually well understood, detailed quantifications of compositional changes induced by a single process are rare, as are examples where the effects of several processes can be distinguished. The present study was designed to characterize the role of mechanical crushing and sorting in the absence of chemical weathering. Twenty sediment samples were taken from Alpine glaciers that erode almost pure granitoid lithologies. For each sample, 11 grain-size fractions from granules to clay (ø grades -1 to 9) were separated, and each fraction was analysed for its chemical composition. The presence of clear steps in the box-plots of all parts (in adequate ilr and clr scales) against ø is assumed to be explained by typical crystal-size ranges for the relevant mineral phases. These scatter plots and the biplot suggest a splitting of the full grain-size range into three groups: coarser than ø = 4 (comparatively rich in SiO2, Na2O, K2O, Al2O3, and dominated by “felsic” minerals like quartz and feldspar), finer than ø = 8 (comparatively rich in TiO2, MnO, MgO, Fe2O3, mostly related to “mafic” sheet silicates like biotite and chlorite), and intermediate grain sizes (4 ≤ ø < 8; comparatively rich in P2O5 and CaO, related to apatite and some feldspar). To further test the absence of chemical weathering, the observed compositions were regressed against three explanatory variables: a trend on grain size in the ø scale, a step function for ø ≥ 4, and another for ø ≥ 8. The original hypothesis was that the trend could be identified with weathering effects, whereas each step function would highlight those minerals with the biggest characteristic size at its lower end. Results suggest that this assumption is reasonable for the step function, but that besides weathering some other factors (different mechanical behaviour of minerals) also make an important contribution to the trend.
Key words: sediment, geochemistry, grain size, regression, step function
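The regression described above amounts to fitting each log-ratio coordinate against a design matrix with an intercept, a linear trend in ø, and the two step functions. A minimal sketch with one synthetic clr coordinate (the coefficient values are illustrative):

```python
import numpy as np

phi = np.arange(-1, 10)                      # 11 grain-size fractions, ø grades
X = np.column_stack([
    np.ones_like(phi, dtype=float),          # intercept
    phi.astype(float),                       # linear trend in ø
    (phi >= 4).astype(float),                # step at ø >= 4
    (phi >= 8).astype(float),                # step at ø >= 8
])
y = 0.3 * phi - 1.2 * (phi >= 4) + 0.8 * (phi >= 8) + 0.05  # synthetic clr coord
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta.round(2))                         # recovers trend and step sizes
```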
Abstract:
“Magic for a Pixeloscope” is a one-hour show conceived to be performed in a theater setting that merges mixed and augmented reality (MR/AR) and full-body interaction with classical magic to create new tricks. The show was conceived by an interdisciplinary team composed of a magician, two interaction designers, a theater director and a stage designer. The magician uses custom hardware and software to create new illusions, which are a starting point for exploring a new language of magical expression. In this paper we introduce a conceptual framework used to inform the design of the different tricks; we explore the design and production of some tricks included in the show, and we describe the feedback received at the world premiere and some of the conclusions obtained.
Abstract:
We consider two fundamental properties in the analysis of two-way tables of positive data: the principle of distributional equivalence, one of the cornerstones of correspondence analysis of contingency tables, and the principle of subcompositional coherence, which forms the basis of compositional data analysis. For an analysis to be subcompositionally coherent, it suffices to analyse the ratios of the data values. The usual approach to dimension reduction in compositional data analysis is to perform principal component analysis on the logarithms of ratios, but this method does not obey the principle of distributional equivalence. We show that by introducing weights for the rows and columns, the method achieves this desirable property. This weighted log-ratio analysis is theoretically equivalent to spectral mapping, a multivariate method developed almost 30 years ago for displaying ratio-scale data from biological activity spectra. The close relationship between spectral mapping and correspondence analysis is also explained, as well as their connection with association modelling. The weighted log-ratio methodology is applied here to frequency data in linguistics and to chemical compositional data in archaeology.
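A minimal sketch of the weighted log-ratio analysis, assuming, as in Greenacre's formulation, that the row and column margins serve as the weights; the table is random illustrative data.

```python
import numpy as np

rng = np.random.default_rng(0)
N = rng.random((10, 6)) + 0.1           # positive two-way table (illustrative)

P = N / N.sum()
r, c = P.sum(axis=1), P.sum(axis=0)     # row and column weights (margins)
L = np.log(P)
L = L - np.outer(L @ c, np.ones_like(c))    # centre rows (c-weighted means)
L = L - np.outer(np.ones_like(r), r @ L)    # centre columns (r-weighted means)

# Weighted SVD: scale by square-root weights, decompose, then rescale.
S = np.sqrt(r)[:, None] * L * np.sqrt(c)[None, :]
U, s, Vt = np.linalg.svd(S, full_matrices=False)
rows = (U * s) / np.sqrt(r)[:, None]    # principal row coordinates
print(rows[:, :2].round(3))             # two-dimensional map of the rows
```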
Abstract:
This paper presents a differential synthetic aperture radar (SAR) interferometry (DIFSAR) approach for investigating deformation phenomena on full-resolution DIFSAR interferograms. In particular, our algorithm extends the capability of the small-baseline subset (SBAS) technique, which relies on small-baseline DIFSAR interferograms only and is mainly focused on investigating large-scale deformations with spatial resolutions of about 100 × 100 m. The proposed technique is implemented by using two different sets of data generated at low (multilook data) and full (single-look data) spatial resolution, respectively. The former is used to identify and estimate, via the conventional SBAS technique, large spatial scale deformation patterns, topographic errors in the available digital elevation model, and possible atmospheric phase artifacts; the latter allows us to detect, on the full-resolution residual phase components, structures highly coherent over time (buildings, rocks, lava, structures, etc.), as well as their height and displacements. In particular, the estimation of the temporal evolution of these local deformations is easily implemented by applying the singular value decomposition technique. The proposed algorithm has been tested with data acquired by the European Remote Sensing satellites relative to the Campania area (Italy) and validated by using geodetic measurements.
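The SVD step can be illustrated on a toy network of acquisitions: each small-baseline interferogram constrains the phase difference between two dates, and the minimum-norm solution of the resulting (possibly rank-deficient) linear system is obtained via the pseudo-inverse. The dates, pairs and phase values below are illustrative.

```python
import numpy as np

pairs = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (2, 4), (4, 5)]
n_acq = 6                               # acquisition dates; date 0 is reference

# Each interferogram phase is the difference of the unknown phases at the
# two acquisition dates (phase at date 0 fixed to zero).
A = np.zeros((len(pairs), n_acq - 1))
for k, (i, j) in enumerate(pairs):
    if j > 0: A[k, j - 1] += 1.0
    if i > 0: A[k, i - 1] -= 1.0

truth = np.array([1.0, 2.5, 3.0, 3.2, 4.0])   # synthetic phase evolution
dphi = A @ truth                              # noise-free interferogram phases

# Minimum-norm solution via the SVD-based pseudo-inverse, as in SBAS.
est = np.linalg.pinv(A) @ dphi
print(est.round(2))                           # recovers the phase time series
```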