993 results for Zero-One Matrices
Abstract:
PhD in Chemical and Biological Engineering
Abstract:
Doctoral thesis in Child Studies (specialization area: Sociology of Childhood).
Abstract:
The purpose of this final degree project (TFC) is to present a lightweight methodology to apply in this so-called
Abstract:
One of the tantalising remaining problems in compositional data analysis lies in how to deal with data sets in which there are components which are essential zeros. By an essential zero we mean a component which is truly zero, not something recorded as zero simply because the experimental design or the measuring instrument has not been sufficiently sensitive to detect a trace of the part. Such essential zeros occur in many compositional situations, such as household budget patterns, time budgets, palaeontological zonation studies, and ecological abundance studies. Devices such as nonzero replacement and amalgamation are almost invariably ad hoc and unsuccessful in such situations. From consideration of such examples it seems sensible to build up a model in two stages, the first determining where the zeros will occur and the second how the unit available is distributed among the non-zero parts. In this paper we suggest two such models, an independent binomial conditional logistic normal model and a hierarchical dependent binomial conditional logistic normal model. The compositional data in such modelling consist of an incidence matrix and a conditional compositional matrix. Interesting statistical problems arise, such as the question of estimability of parameters, the nature of the computational process for the estimation of both the incidence and compositional parameters caused by the complexity of the subcompositional structure, the formation of meaningful hypotheses, and the devising of suitable testing methodology within a lattice of such essential zero-compositional hypotheses. The methodology is illustrated by application to both simulated and real compositional data.
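A minimal sketch of the two-stage structure described in this abstract; the notation (Z_ni, p_i, alr) is an illustrative assumption, not taken from the paper:

```latex
% Stage 1 (incidence): part i of sample n is present or absent,
%   Z_{ni} ~ Bernoulli(p_i)   (the independent binomial variant).
% Stage 2 (composition): given the non-zero set S_n = {i : Z_{ni} = 1},
% the unit is shared among those parts by a logistic-normal law.
\[
  Z_{ni} \sim \mathrm{Bernoulli}(p_i), \qquad
  \operatorname{alr}\!\bigl(\mathbf{x}_{n,S_n}\bigr)
  \sim \mathcal{N}\bigl(\boldsymbol{\mu}_{S_n}, \boldsymbol{\Sigma}_{S_n}\bigr),
\]
```

Here alr is the additive log-ratio transform applied to the non-zero subcomposition; the incidence matrix collects the Z_ni and the conditional compositional matrix the x_n,S_n.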
Abstract:
Larger and larger deformable mirrors, with ever more actuators, are currently being used in adaptive optics applications. The control of mirrors with hundreds of actuators is a topic of great interest, since classical control techniques based on the pseudoinverse of the system control matrix become too slow when dealing with matrices of such large dimensions. This doctoral thesis proposes a method for accelerating and parallelizing the control algorithms of these mirrors, through the application of a control technique based on zeroing the smallest components of the control matrix (sparsification), followed by optimization of the ordering of the command actuators according to the shape of the matrix, and finally its subsequent division into small tridiagonal blocks. These blocks are much smaller and easier to use in computations, which allows much higher computation speeds through the elimination of the null components of the control matrix. Moreover, this approach enables parallelization of the computation, giving the system an additional speed component. Even without parallelization, an increase of almost 40% in the convergence speed of mirrors with only 37 actuators was obtained using the proposed technique. To validate this, a complete new experimental setup was implemented, which includes a programmable phase modulator for the generation of turbulence by means of phase screens, and a complete model of the control loop was developed to investigate the performance of the proposed algorithm. The results, both in simulation and experimentally, show full equivalence in the deviation values after compensation of the different types of aberrations for the different algorithms used, although the method proposed here entails a much lower computational load. The procedure is expected to be very successful when applied to very large mirrors.
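A minimal Python/NumPy/SciPy sketch of the pipeline described above; the 5% threshold, the reverse Cuthill-McKee reordering, and all names are illustrative assumptions, not the thesis implementation:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import reverse_cuthill_mckee

def sparsify_control_matrix(C, rel_threshold=0.05):
    """Zero every entry smaller than rel_threshold * max|C| (sparsification)."""
    C = C.copy()
    C[np.abs(C) < rel_threshold * np.abs(C).max()] = 0.0
    return C

def reorder_for_bandwidth(C_sparse):
    """Reorder actuators so nonzeros concentrate near the diagonal; the
    reordered matrix can then be split into small (tri)diagonal blocks."""
    perm = reverse_cuthill_mckee(csr_matrix(C_sparse))
    return C_sparse[np.ix_(perm, perm)], perm

# Illustrative use on a synthetic 37x37 "control matrix" (37 actuators):
rng = np.random.default_rng(0)
dist = np.abs(np.subtract.outer(np.arange(37), np.arange(37)))
C = rng.normal(size=(37, 37)) * np.exp(-dist / 3)   # near-diagonal coupling
C_sp = sparsify_control_matrix(C)
C_band, perm = reorder_for_bandwidth(C_sp)
print("nonzeros kept:", np.count_nonzero(C_sp), "of", C.size)
```

Reverse Cuthill-McKee is used here as one standard bandwidth-reducing ordering; the thesis may optimize the actuator ordering differently, with the tridiagonal-block split then applied to the reordered matrix.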
Abstract:
There is hardly a case in exploration geology where the studied data do not include below-detection-limit and/or zero values, and since most geological data follow lognormal distributions, these “zero data” represent a mathematical challenge for interpretation. We need to start by recognizing that there are zero values in geology. For example, the amount of quartz in a foyaite (nepheline syenite) is zero, since quartz cannot coexist with nepheline. Another common essential zero is a North azimuth; however, we can always exchange that zero for the value of 360°. These are known as “essential zeros”, but what can we do with “rounded zeros” that result from values below the detection limit of the equipment? Amalgamation, e.g. adding Na2O and K2O as total alkalis, is one solution, but sometimes we need to differentiate between a sodic and a potassic alteration. Pre-classification into groups requires a good knowledge of the distribution of the data and the geochemical characteristics of the groups, which is not always available. Setting the zero values equal to the limit of detection of the equipment used will generate spurious distributions, especially in ternary diagrams. The same situation will occur if we replace the zero values by a small amount using non-parametric or parametric techniques (imputation). The method that we are proposing takes into consideration the well-known relationships between some elements. For example, in copper porphyry deposits there is always a good direct correlation between the copper values and the molybdenum ones, but while copper will always be above the limit of detection, many of the molybdenum values will be “rounded zeros”. So we take the lower quartile of the real molybdenum values, establish a regression equation with copper, and then estimate the “rounded” zero values of molybdenum from their corresponding copper values. The method can be applied to any type of data, provided we first establish their correlation dependency. One of the main advantages of this method is that we do not obtain a fixed value for the “rounded zeros”, but one that depends on the value of the other variable. Key words: compositional data analysis, treatment of zeros, essential zeros, rounded zeros, correlation dependency
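A minimal Python sketch of the proposed regression replacement; the abstract fixes only the lower-quartile fit and the Cu-Mo example, so the log-log linear form and all names here are illustrative assumptions:

```python
import numpy as np

def impute_rounded_zeros(cu, mo, detection_limit):
    """Estimate below-detection Mo values from correlated Cu values.

    cu, mo : 1-D arrays of concentrations (Cu assumed all measured and > 0);
    mo may contain "rounded zeros" recorded as 0 or below detection_limit.
    """
    cu, mo = np.asarray(cu, float), np.asarray(mo, float)
    measured = mo >= detection_limit                    # real Mo values
    # Fit on the lower quartile of the measured Mo values, i.e. the part of
    # the distribution closest to the censored range (log-log for lognormal data).
    q1 = np.quantile(mo[measured], 0.25)
    fit = measured & (mo <= q1)
    slope, intercept = np.polyfit(np.log(cu[fit]), np.log(mo[fit]), 1)
    # Replace each rounded zero by the value predicted from its Cu content,
    # so every imputed value depends on the companion variable, not a constant.
    mo_out = mo.copy()
    mo_out[~measured] = np.exp(intercept + slope * np.log(cu[~measured]))
    return mo_out
```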
Abstract:
The subject of this project is “Energy Dispersive X-Ray Fluorescence” (EDXRF). This technique can be used for a tremendous variety of elemental analysis applications. It provides one of the simplest, most accurate and most economic analytical methods for the determination of the chemical composition of many types of materials. The purposes of this project are:
- To give some basic information about Energy Dispersive X-ray Fluorescence.
- To perform qualitative and quantitative analysis of different samples (water dissolutions, powders, oils, ...) in order to define the sensitivity and detection limits of the equipment.
- To make a comprehensive and easy-to-use manual of the ARL QUANT'X Energy Dispersive X-Ray Fluorescence apparatus.
Abstract:
Agro-ecosystems have recently experienced dramatic losses of biodiversity due to more intensive production methods. In order to increase species diversity, agri-environment schemes provide subsidies to farmers who devote a fraction of their land to ecological compensation areas (ECA). Several studies have shown that invertebrate biodiversity is actually higher in ECA than in nearby intensively cultivated farmland. It remains poorly understood, however, to what extent ECA also favour vertebrates, such as small mammals and their predators, which would contribute to restoring functioning food chains within revitalized agricultural matrices. We studied small mammal populations among eight habitat types - including wildflower areas, a specific ECA in Switzerland - and habitat selection (radiotracking) by the barn owl Tyto alba, one of their principal predators. Our prediction was that habitats with higher abundances of small mammals would be visited more by foraging barn owls during the period of chicks' provisioning. Small mammal abundance tended to be higher in wildflower areas than in any other habitat type. Barn owls, however, preferred to forage in cereal fields and grassland. They avoided all types of crops other than cereals, as well as wildflower areas, which suggests that they do not select their hunting habitat primarily with respect to prey density. Rather than prey abundance, prey accessibility may play the more crucial role: wildflower areas have a dense vegetation cover, which may impede access to prey for foraging owls. The exploitation of wildflower areas by the owls might be enhanced by creating open foraging corridors within or around wildflower areas. Wildflower areas managed in that way might contribute to restoring functioning food chains within agro-ecosystems.
Abstract:
Maize root growth is negatively affected by compacted layers near the surface (e.g. from agricultural traffic) and in the subsoil (e.g. claypans). Both kinds of soil mechanical impedance often coexist in maize fields, but their combined effects on root growth have seldom been studied. Soil physical properties and maize root abundance were determined in three different soils of the Rolling Pampa of Argentina, in conventionally-tilled (CT) and zero-tilled (ZT) fields cultivated with maize. In the soil with a light Bt horizon (loamy Typic Argiudoll, Chivilcoy site), induced plough pans were detected in CT plots at a depth of 0-0.12 m through significant increases in bulk density (1.15 to 1.27 Mg m-3) and cone (tip angle of 60°) penetrometer resistance (7.18 to 9.37 MPa in summer from ZT to CT, respectively). This caused a reduction in maize root abundance of 40-80% in CT compared to ZT plots below the induced pans. Two of the studied soils had hard-structured Bt horizons (claypans), but in only one of them (silty clay loam Abruptic Argiudoll, Villa Lía site) were the expected increases in penetrometer resistance (up to 9 MPa) observed with depth. In the other claypan soil (silty clay loam Vertic Argiudoll, Pérez Millán site), penetrometer resistance did not increase with depth but reached 14.5 MPa at 0.075 and 0.2 m depth in CT and ZT plots, respectively. Nevertheless, maize root abundance was stratified in the first 0.2 m at the Villa Lía and Pérez Millán sites. There, the hard Bt horizons represented not an absolute but a relative mechanical impedance to maize roots, as shown by root clumping through desiccation cracks.
Abstract:
We present a complete calculation of the structure of liquid 4He confined to a concave nanoscopic wedge, as a function of the opening angle of the walls. This is achieved within a finite-range density functional formalism. The results here presented, restricted to alkali metal substrates, illustrate the change in meniscus shape from rather broad to narrow wedges on weak and strong alkali adsorbers, and we relate this change to the wetting behavior of helium on the corresponding planar substrate. As the wedge angle is varied, we find a sequence of stable states that, in the case of cesium, undergo one filling and one emptying transition at large and small openings, respectively. A computationally unambiguous criterion to determine the contact angle of 4He on cesium is also proposed.
Abstract:
Within local-spin-density functional theory, we have investigated the “dissociation” of few-electron circular vertical semiconductor double quantum ring artificial molecules at zero magnetic field as a function of interring distance. In a first step, the molecules are constituted by two identical quantum rings. When the rings are quantum mechanically strongly coupled, the electronic states are substantially delocalized, and the addition energy spectra of the artificial molecule resemble those of a single quantum ring in the few-electron limit. When the rings are quantum mechanically weakly coupled, the electronic states in the molecule are substantially localized in one ring or the other, although the rings can be electrostatically coupled. A slight mismatch introduced in the molecules, from nominally identical quantum wells or from changes in the inner radius of the constituent rings, induces localization by offsetting the energy levels in the quantum rings. This plays a crucial role in the appearance of the addition spectra as a function of coupling strength, particularly in the weak coupling limit.
Abstract:
We report variational calculations, in the hypernetted-chain (HNC)-Fermi-HNC scheme, of one-body density matrices and one-particle momentum distributions for 3He-4He mixtures described by a Jastrow correlated wave function. The 4He condensate fractions and the 3He strength poles are examined and compared with the available Monte Carlo results. The agreement is found to be very satisfactory. Their density dependence is also studied.
Abstract:
The safe and responsible development of engineered nanomaterials (ENMs), nanotechnology-based materials and products, together with the definition of regulatory measures and implementation of "nano" legislation in Europe, requires a widely supported scientific basis and sufficient high-quality data upon which to base decisions. At the very core of such a scientific basis is a general agreement on key issues related to risk assessment of ENMs, which encompass the key parameters to characterise ENMs, appropriate methods of analysis, and the best approach to express the effect of ENMs in widely accepted dose-response toxicity tests. The following major conclusions were drawn. Due to the high batch variability of the characteristics of commercially available and, to a lesser degree, laboratory-made ENMs, it is not possible to make general statements regarding the toxicity resulting from exposure to ENMs.
1) Concomitant with using the OECD priority list of ENMs, other criteria for the selection of ENMs, such as relevance for mechanistic (scientific) studies or risk-assessment-based studies, widespread availability (and thus high expected volumes of use) or consumer concern (route of consumer exposure depending on application), could be helpful. The OECD priority list focusses on the validity of OECD tests; therefore source material will be first in scope for testing. For risk assessment, however, it is much more relevant to have toxicity data from material as present in the products/matrices to which humans and the environment are exposed.
2) For most, if not all, characteristics of ENMs, standardized analytical methods, though not necessarily validated, are available. Generally these methods are only able to determine one single characteristic, and some of them can be rather expensive. Practically, it is currently not feasible to fully characterise ENMs. Many techniques that are available to measure the same nanomaterial characteristic produce contrasting results (e.g. reported sizes of ENMs). It was recommended that at least two complementary techniques be employed to determine a metric of ENMs. The first great challenge is to prioritise metrics which are relevant in the assessment of biological dose-response relations and to develop analytical methods for characterising ENMs in biological matrices. It was generally agreed that one metric is not sufficient to describe ENMs fully.
3) Characterisation of ENMs in biological matrices starts with sample preparation. It was concluded that there is currently no standard approach/protocol for sample preparation to control agglomeration/aggregation and (re)dispersion. It was recommended that harmonization be initiated and that exchange of protocols should take place. The precise methods used to disperse ENMs should be specifically, yet succinctly, described within the experimental section of a publication.
4) ENMs need to be characterised in the matrix as it is presented to the test system (in vitro / in vivo).
5) Alternative approaches (e.g. biological or in silico systems) for the characterisation of ENMs are simply not possible with current knowledge.
Contributors: Iseult Lynch, Hans Marvin, Kenneth Dawson, Markus Berges, Diane Braguer, Hugh J. Byrne, Alan Casey, Gordon Chambers, Martin Clift, Giuliano Elia, Teresa F. Fernandes, Lise Fjellsbø, Peter Hatto, Lucienne Juillerat, Christoph Klein, Wolfgang Kreyling, Carmen Nickel, and Vicki Stone.
Abstract:
Volumes of data used in science and industry are growing rapidly. When researchers face the challenge of analyzing them, their format is often the first obstacle: the lack of standardized ways of exploring different data layouts requires an effort each time to solve the problem from scratch. The possibility to access data in a rich, uniform manner, e.g. using Structured Query Language (SQL), would offer expressiveness and user-friendliness. Comma-separated values (CSV) is one of the most common data storage formats. Despite its simplicity, handling it becomes non-trivial as file size grows. Importing CSVs into existing databases is time-consuming and troublesome, or even impossible if the horizontal dimension reaches thousands of columns. Most databases are optimized for handling a large number of rows rather than columns; therefore, performance for datasets with non-typical layouts is often unacceptable. Other challenges include schema creation, updates and repeated data imports. To address the above-mentioned problems, I present a system for accessing very large CSV-based datasets by means of SQL. It is characterized by:
- a "no copy" approach - data stay mostly in the CSV files;
- "zero configuration" - no need to specify a database schema;
- written in C++ with boost [1], SQLite [2] and Qt [3], it does not require installation and has a very small size;
- query rewriting, dynamic creation of indices for appropriate columns, and static data retrieval directly from CSV files ensure efficient plan execution;
- effortless support for millions of columns;
- due to per-value typing, using mixed text/number data is easy;
- a very simple network protocol provides an efficient interface for MATLAB and reduces implementation time for other languages.
The software is available as freeware along with educational videos on its website [4]. It does not need any prerequisites to run, as all of the libraries are included in the distribution package. I test it against existing database solutions using a battery of benchmarks and discuss the results.
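A minimal Python sketch of the "dynamic index + direct retrieval" idea described above; this is a generic illustration of the technique, not the tool's C++ implementation, and the class and column names are assumptions:

```python
import csv
from collections import defaultdict

class CsvIndex:
    """On-demand per-column index over a CSV file ("no copy": rows are read
    back from the file itself, never imported into a database).
    Assumes fields contain no embedded newlines."""

    def __init__(self, path):
        self.path = path
        with open(path, newline="") as f:
            self.header = next(csv.reader(f))
        self.indices = {}                      # column name -> {value: [offsets]}

    def _build(self, column):
        col = self.header.index(column)
        index = defaultdict(list)
        with open(self.path, newline="") as f:
            f.readline()                       # skip header row
            offset = f.tell()
            for line in iter(f.readline, ""):
                index[next(csv.reader([line]))[col]].append(offset)
                offset = f.tell()
        self.indices[column] = index

    def lookup(self, column, value):
        """Roughly: SELECT * FROM csv WHERE column = value (equality only)."""
        if column not in self.indices:         # dynamic index creation
            self._build(column)
        with open(self.path, newline="") as f:
            for off in self.indices[column].get(value, []):
                f.seek(off)                    # static retrieval from the file
                yield next(csv.reader([f.readline()]))
```

A call such as list(CsvIndex("data.csv").lookup("id", "42")) would then stream matching rows straight from the file; full SQL support, per-value typing and multi-column predicates are where the actual system goes further.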
Abstract:
We study energy relaxation in thermalized one-dimensional nonlinear arrays of the Fermi-Pasta-Ulam type. The ends of the thermalized systems are placed in contact with a zero-temperature reservoir via damping forces. Harmonic arrays relax by sequential phonon decay into the cold reservoir, the lower-frequency modes relaxing first. The relaxation pathway for purely anharmonic arrays involves the degradation of higher-energy nonlinear modes into lower-energy ones. The lowest-energy modes are absorbed by the cold reservoir, but a small amount of energy is persistently left behind in the array in the form of almost stationary low-frequency localized modes. Arrays with interactions that contain both a harmonic and an anharmonic contribution exhibit behavior that involves the interplay of phonon modes and breather modes. At long times relaxation is extremely slow due to the spontaneous appearance and persistence of energetic high-frequency stationary breathers. Breather behavior is further ascertained by explicitly injecting a localized excitation into the thermalized arrays and observing the relaxation behavior.
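For reference, a sketch of the model class named above; the quartic FPU-β form and a single damping constant γ acting on the end sites are assumptions for illustration, since the abstract does not fix the variant:

```latex
% FPU-type chain: harmonic (k) plus anharmonic (beta) nearest-neighbour
% coupling; beta = 0 gives the harmonic array, k = 0 the purely anharmonic one.
\[
  H = \sum_{n=1}^{N} \frac{p_n^2}{2m}
    + \sum_{n} \left[ \frac{k}{2}\,(x_{n+1}-x_n)^2
                    + \frac{\beta}{4}\,(x_{n+1}-x_n)^4 \right]
\]
% Zero-temperature reservoir: damping forces act only on the end particles,
\[
  m\ddot{x}_n = -\,\partial H/\partial x_n
                - \gamma\,\dot{x}_n\,\bigl(\delta_{n,1} + \delta_{n,N}\bigr).
\]
```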