952 results for Three models
Abstract:
In this study, the toxicity of several heavy metals and arsenic was analysed using different biological models. In the first part of this work, the Microtox toxicity bioassay, which is based on the change in light emission of the luminescent bacterium Vibrio fischeri, was used to establish the dose-response curves of several toxic elements, namely Zn(II), Pb(II), Cu(II), Hg(II), Ag(I), Co(II), Cd(II), Cr(VI), As(V) and As(III), in aqueous solutions. The experiments were carried out at pH 6.0 and 7.0 to show that pH can influence the final measured toxicity of some metals owing to changes in their chemical speciation. Different types of dose-response curves were found depending on the metal analysed and the pH of the medium. In the case of arsenic, the effect of pH on the toxicity of arsenate and arsenite was investigated using the Microtox assay over a pH range from 5.0 to 9.0. The EC50 values determined for As(V) decrease, reflecting an increase in toxicity, as the pH of the solution increases, whereas for As(III) the EC50 values hardly vary between pH 6.0 and 8.0 and only decrease at pH 9.0. HAsO42- and H2AsO3- were identified as the most toxic species. Likewise, a statistical analysis revealed an antagonistic effect between the arsenate chemical species that coexist at pH 6.0 and 7.0. In addition, the results of two statistical methods for predicting the toxicity and possible interactions of Co(II), Cd(II), Cu(II), Zn(II) and Pb(II) in equitoxic binary mixtures were compared with the toxicity observed on the bacterium Vibrio fischeri. The combined effect of these metals proved to be antagonistic for the Co(II)-Cd(II), Cd(II)-Zn(II), Cd(II)-Pb(II) and Cu(II)-Pb(II) mixtures, synergistic for Co(II)-Cu(II) and Zn(II)-Pb(II), and additive in the remaining cases, revealing a complex pattern of possible interactions. The synergistic effect of the Co(II)-Cu(II) combination and the sharp decrease in Pb(II) toxicity in the presence of Cd(II) deserve further attention when environmental safety regulations are established. The sensitivity of the Microtox assay was also determined. The EC20 values, which represent the measurable toxicity threshold, were determined for each element individually and were found to increase in the following order: Pb(II) < Ag(I) < Hg(II) Cu(II) < Zn(II) < As(V) < Cd(II) Co(II) < As(III) < Cr(VI). These values were compared with the concentrations permitted in industrial wastewater by the official regulations of Catalonia (Spain). The Microtox assay proved sensitive enough to detect the elements tested with respect to the official pollution-control standards, except in the case of cadmium, mercury, arsenate, arsenite and chromate. In the second part of this work, to complement the previous results obtained with the Microtox acute toxicity assay, the chronic effects of Cd(II), Cr(VI) and As(V) on growth rate and viability were analysed in the same biological model. Surprisingly, these harmful chemicals proved to be only slightly toxic to this bacterium when their effect was measured after long exposure times. Even so, in the case of Cr(VI), the viability inhibition assay proved more sensitive than the Microtox acute toxicity assay. Likewise, a clear hormesis phenomenon could be observed, especially for Cd(II), when the viability inhibition assay was used. In addition, several experiments were carried out to try to explain the lack of Cr(VI) toxicity shown by the bacterium Vibrio fischeri. The resistance shown by this bacterium could be attributed to its capacity to convert Cr(VI) into the less toxic Cr(III) form. This reduction capacity was found to depend on the composition of the culture medium, the initial Cr(VI) concentration, the incubation time and the presence of a carbon source. In the third part of this work, the human cell line HT29 and primary cultures of Sparus sarba blood cells were used in vitro to detect threshold metal toxicity by measuring the overexpression of stress proteins. Sludge extracts from several wastewater treatment plants and different metals, individually or in combination, were tested on human cell cultures to evaluate their effect on growth rate and their capacity to induce the synthesis of the cell stress-related protein Hsp72. No significant adverse effects were found when the components were tested individually. Nevertheless, when present together, they had an adverse effect on both growth rate and stress protein expression. In addition, blood cells from Sparus sarba were exposed in vitro to different concentrations of cadmium, lead and chromium. The stress protein HSP70 was significantly overexpressed after exposure to concentrations as low as 0.1 M. Under our working conditions, no overexpression of metallothioneins was observed. Nevertheless, fish blood cells proved to be an interesting biological model for use in toxicity analyses. Both biological models proved very suitable for accurately detecting metal-induced toxicity. In general, toxicity assessment based on the analysis of stress protein overexpression is more sensitive than toxicity assessment carried out at the organism level. From the results obtained, we can conclude that a battery of bioassays is truly necessary to accurately assess metal toxicity, since there are large variations between the toxicity values obtained with different organisms and many environmental factors can influence and modify the results obtained.
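As an illustration of how EC50 and EC20 values like those above are typically derived from Microtox-style dose-response data, the following is a minimal sketch using a log-logistic (Hill-type) fit; the concentrations, inhibition values and function names are placeholders, not data from the thesis.

```python
# Minimal sketch (not from the thesis): estimating EC50 and EC20 from a
# Microtox-style dose-response curve with a log-logistic (Hill-type) fit.
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, ec50, slope):
    """Fraction of luminescence inhibition at a given concentration."""
    return conc**slope / (ec50**slope + conc**slope)

# Hypothetical concentrations (mg/L) and measured inhibition fractions.
conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])
inhib = np.array([0.02, 0.08, 0.25, 0.55, 0.82, 0.95])

(ec50, slope), _ = curve_fit(hill, conc, inhib, p0=[1.0, 1.0])

# ECx follows from inverting the fitted curve, here at 20% inhibition.
ec20 = ec50 * (0.20 / 0.80) ** (1.0 / slope)
print(f"EC50 = {ec50:.2f} mg/L, EC20 = {ec20:.2f} mg/L")
```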
Abstract:
This doctoral thesis offers a quantitative and qualitative analysis of the changes in the urban shape and landscape of the Girona Counties between 1979 and 2006. The theoretical part of the research lies within the framework of the dispersed city phenomenon, and is based on the hypothesis of convergence towards a global urban model. The empirical part demonstrates this proposition with a study of 522 zone development plans in the Girona Counties. The results point to the consolidation of the dispersed city phenomenon, as shown by the sudden increase in built-up space, the spread of urban development throughout the territory, and the emergence of a new, increasingly generic landscape comprising three major morphological types: urban extensions, low-density residential estates and industrial zones. This reveals shortcomings in the planning of urban growth, a weakening of the city as a public project, and a certain degradation of the Mediterranean city model.
Abstract:
The human visual ability to perceive depth looks like a puzzle. We perceive three-dimensional spatial information quickly and efficiently by using the binocular stereopsis of our eyes and, what is more important, the knowledge of the most common objects that we have acquired through experience. Nowadays, modelling the behaviour of our brain remains a fiction; that is why the huge problem of 3D perception and, beyond it, interpretation is split into a sequence of easier problems. A great deal of research in robot vision is devoted to obtaining 3D information about the surrounding scene. Most of this research is based on modelling human stereopsis by using two cameras as if they were two eyes. This method is known as stereo vision; it has been widely studied in the past, is still being studied at present, and a lot of work will surely be done in the future. This fact allows us to affirm that this topic is one of the most interesting ones in computer vision. The stereo vision principle is based on obtaining the three-dimensional position of an object point from the positions of its projections in both camera image planes. However, before inferring 3D information, the mathematical models of both cameras have to be known. This step is known as camera calibration and is broadly described in the thesis. Perhaps the most important problem in stereo vision is the determination of the pairs of homologous points in the two images, known as the correspondence problem; it is also one of the most difficult problems to solve and is currently investigated by many researchers. Epipolar geometry allows us to reduce the correspondence problem, and an approach to it is described in the thesis. Nevertheless, it does not solve the problem completely, as many other considerations have to be taken into account. For example, we have to consider points without correspondence due to surface occlusion or simply because their projection falls outside the field of view of one camera. The interest of the thesis is focused on structured light, which is one of the most frequently used techniques to reduce the problems related to stereo vision. Structured light is based on the relationship between a projected light pattern, its projection onto the scene and an image sensor. The deformation between the pattern projected onto the scene and the one captured by the camera makes it possible to obtain three-dimensional information about the illuminated scene. This technique has been widely used in applications such as 3D object reconstruction, robot navigation, quality control, and so on. Although the projection of regular patterns solves the problem of points without a match, it does not solve the problem of multiple matching, which forces us to use computationally expensive algorithms to search for the correct matches. In recent years, another structured light technique has grown in importance. This technique is based on coding the light projected onto the scene so that it can be used as a tool to obtain a unique match. As each token of light is imaged by the camera, we only have to read its label (decode the pattern) in order to solve the correspondence problem. The advantages and disadvantages of stereo vision compared with structured light, together with a survey of coded structured light, are presented and discussed. The work carried out in the framework of this thesis has led to a new coded structured light pattern which solves the correspondence problem uniquely and robustly: uniquely, because each token of light is coded by a different word, which removes the problem of multiple matching; robustly, because the pattern is coded using the position of each token of light with respect to both coordinate axes. Algorithms and experimental results are included in the thesis. The reader can see examples of 3D measurement of static objects, as well as the more complicated measurement of moving objects. The technique can be used in both cases, as the pattern is coded in a single projection shot, and it can therefore be used in several robot vision applications. Our interest is focused on the mathematical study of the camera and pattern projector models. We are also interested in how these models can be obtained by calibration, and how they can be used to obtain three-dimensional information from two corresponding points. Furthermore, we have studied structured light and coded structured light, and we have presented a new coded structured light pattern. However, in this thesis we started from the assumption that the corresponding points could be well segmented from the captured image. Computer vision constitutes a huge problem, and a lot of work is being done at all levels of human vision modelling, starting from (a) image acquisition; (b) image enhancement, filtering and processing; and (c) image segmentation, which involves thresholding, thinning, contour detection, texture and colour analysis, and so on. The interest of this thesis starts in the next step, usually known as depth perception or 3D measurement.
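The recovery of a 3D point from its projections in two calibrated image planes described above corresponds to the standard triangulation step. The sketch below shows a minimal linear (DLT) triangulation from two known projection matrices, assuming the correspondence is already solved; the camera matrices and pixel coordinates are illustrative only. In a structured light setup, the second "camera" is typically the calibrated pattern projector, but the triangulation itself is analogous.

```python
# Minimal sketch: linear (DLT) triangulation of a 3D point from its
# projections in two calibrated views (correspondence assumed known).
import numpy as np

def triangulate(P1, P2, x1, x2):
    """P1, P2: 3x4 projection matrices; x1, x2: (u, v) pixel coordinates."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Solve A X = 0 in the least-squares sense via SVD.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]          # back to inhomogeneous coordinates

# Illustrative cameras: identical intrinsics, second camera shifted along x.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.2], [0], [0]])])

X_true = np.array([0.1, -0.05, 2.0])
x1 = P1 @ np.append(X_true, 1); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1); x2 = x2[:2] / x2[2]
print(triangulate(P1, P2, x1, x2))   # ~ [0.1, -0.05, 2.0]
```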
Abstract:
Radiation schemes in general circulation models currently make a number of simplifications when accounting for clouds, one of the most important being the removal of horizontal inhomogeneity. A new scheme is presented that attempts to account for the neglected inhomogeneity by using two regions of cloud in each vertical level of the model as opposed to one. One of these regions is used to represent the optically thinner cloud in the level, and the other represents the optically thicker cloud. So, along with the clear-sky region, the scheme has three regions in each model level and is referred to as “Tripleclouds.” In addition, the scheme has the capability to represent arbitrary vertical overlap between the three regions in pairs of adjacent levels. This scheme is implemented in the Edwards–Slingo radiation code and tested on 250 h of data from 12 different days. The data are derived from cloud retrievals using radar, lidar, and a microwave radiometer at Chilbolton, southern United Kingdom. When the data are grouped into periods equivalent in size to general circulation model grid boxes, the shortwave plane-parallel albedo bias is found to be 8%, while the corresponding bias is found to be less than 1% using Tripleclouds. Similar results are found for the longwave biases. Tripleclouds is then compared to a more conventional method of accounting for inhomogeneity that multiplies optical depths by a constant scaling factor, and Tripleclouds is seen to improve on this method both in terms of top-of-atmosphere radiative flux biases and internal heating rates.
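To see why two cloudy regions per level reduce the plane-parallel albedo bias described above, consider a toy concave albedo-versus-optical-depth relation: computing albedo from the mean optical depth (one homogeneous region) overestimates the true gridbox-mean albedo, whereas averaging the albedos of an optically thinner and an optically thicker region comes much closer. The albedo function and optical-depth distribution below are illustrative only, not the Edwards–Slingo scheme.

```python
# Minimal sketch: why representing sub-grid cloud with two optical-depth
# regions (as in Tripleclouds) reduces the plane-parallel albedo bias.
import numpy as np

def albedo(tau, k=7.0):
    """Toy concave albedo-vs-optical-depth relation (illustrative only)."""
    return tau / (tau + k)

rng = np.random.default_rng(0)
tau = rng.lognormal(mean=1.0, sigma=0.8, size=100_000)  # inhomogeneous cloud

truth = albedo(tau).mean()                     # "exact" gridbox-mean albedo
plane_parallel = albedo(tau.mean())            # single homogeneous region

# Two regions: optically thinner and thicker halves of the distribution.
thin, thick = np.array_split(np.sort(tau), 2)
two_region = 0.5 * (albedo(thin.mean()) + albedo(thick.mean()))

print(f"truth {truth:.3f}  plane-parallel {plane_parallel:.3f}  "
      f"two-region {two_region:.3f}")
```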
Abstract:
Compute grids are used widely in many areas of environmental science, but there has been limited uptake of grid computing by the climate modelling community, partly because the characteristics of many climate models make them difficult to use with popular grid middleware systems. In particular, climate models usually produce large volumes of output data, and running them usually involves complicated workflows implemented as shell scripts. For example, NEMO (Smith et al. 2008) is a state-of-the-art ocean model that is currently used for operational ocean forecasting in France, and will soon be used in the UK for both ocean forecasting and climate modelling. On a typical modern cluster, a one-year global ocean simulation at 1-degree resolution takes about three hours when running on 40 processors, and produces roughly 20 GB of output as 50000 separate files. 50-year simulations are common, during which the model is resubmitted as a new job after each year. Running NEMO relies on a set of complicated shell scripts and command-line utilities for data pre-processing and post-processing prior to job resubmission. Grid Remote Execution (G-Rex) is a pure Java grid middleware system that allows scientific applications to be deployed as Web services on remote computer systems, and then launched and controlled as if they were running on the user's own computer. Although G-Rex is general-purpose middleware, it has two key features that make it particularly suitable for remote execution of climate models: (1) output from the model is transferred back to the user while the run is in progress, to prevent it from accumulating on the remote system and to allow the user to monitor the model; (2) the client component is a command-line program that can easily be incorporated into existing model workflow scripts. G-Rex has a REST (Fielding, 2000) architectural style, which allows client programs to be very simple and lightweight and allows users to interact with model runs using only a basic HTTP client (such as a Web browser or the curl utility) if they wish. This design also allows new client interfaces to be developed in other programming languages with relatively little effort. The G-Rex server is a standard Web application that runs inside a servlet container such as Apache Tomcat and is therefore easy for system administrators to install and maintain. G-Rex is employed as the middleware for the NERC Cluster Grid, a small grid of HPC clusters belonging to collaborating NERC research institutes. Currently the NEMO (Smith et al. 2008) and POLCOMS (Holt et al., 2008) ocean models are installed, and there are plans to install the Hadley Centre's HadCM3 model for use in the decadal climate prediction project GCEP (Haines et al., 2008). The science projects involving NEMO on the Grid have a particular focus on data assimilation (Smith et al. 2008), a technique that involves constraining model simulations with observations. The POLCOMS model will play an important part in the GCOMS project (Holt et al., 2008), which aims to simulate the world's coastal oceans. A typical use of G-Rex by a scientist to run a climate model on the NERC Cluster Grid proceeds as follows: (1) The scientist prepares input files on his or her local machine. (2) Using information provided by the Grid's Ganglia monitoring system, the scientist selects an appropriate compute resource. (3) The scientist runs the relevant workflow script on his or her local machine. This is unmodified except that calls to run the model (e.g. with "mpirun") are simply replaced with calls to "GRexRun". (4) The G-Rex middleware automatically handles the uploading of input files to the remote resource, and the downloading of output files back to the user, including their deletion from the remote system, during the run. (5) The scientist monitors the output files using familiar analysis and visualization tools on his or her own local machine. G-Rex is well suited to climate modelling because it addresses many of the middleware usability issues that have led to limited uptake of grid computing by climate scientists. It is a lightweight, low-impact and easy-to-install solution that is currently designed for use in relatively small grids such as the NERC Cluster Grid. A current topic of research is the use of G-Rex as an easy-to-use front end to larger-scale Grid resources such as the UK National Grid Service.
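As a hypothetical illustration of the workflow substitution described in step (3), the sketch below wraps a single model-launch step so that the local "mpirun" call can be swapped for a remote-execution client; the model executable, its arguments and the way GRexRun is invoked here are placeholders, since the abstract does not give the real command-line interfaces.

```python
# Hypothetical sketch of the workflow-script substitution described above:
# one launch step in which "mpirun" is swapped for a remote-execution client
# such as GRexRun. The model command and options are placeholders, not the
# real NEMO or GRexRun interfaces.
import subprocess

def launch_model(year, use_grid=False):
    """Run one simulated year, either locally or via the grid client."""
    model_cmd = ["./nemo.exe", f"--year={year}"]       # placeholder model call
    if use_grid:
        cmd = ["GRexRun"] + model_cmd                  # remote execution
    else:
        cmd = ["mpirun", "-np", "40"] + model_cmd      # local cluster run
    subprocess.run(cmd, check=True)

# A 50-year simulation resubmitted year by year, as in the NEMO workflow.
for year in range(1, 51):
    launch_model(year, use_grid=True)
```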
Abstract:
Land use and land cover changes in the Brazilian Amazon have major implications for regional and global carbon (C) cycling. Cattle pasture represents the largest single use (about 70%) of this once-forested land in most of the region. The main objective of this study was to evaluate the accuracy of the RothC and Century models at estimating soil organic C (SOC) changes under forest-to-pasture conditions in the Brazilian Amazon. We used data from 11 site-specific 'forest to pasture' chronosequences with the Century Ecosystem Model (Century 4.0) and the Rothamsted C Model (RothC 26.3). The models predicted that forest clearance and conversion to well managed pasture would cause an initial decline in soil C stocks (0-20 cm depth), followed in the majority of cases by a slow rise to levels exceeding those under native forest. One exception to this pattern was a chronosequence in Suia-Missu, which is under degraded pasture. In three other chronosequences the recovery of soil C under pasture appeared to be only to about the same level as under the previous forest. Statistical tests were applied to determine levels of agreement between simulated SOC stocks and observed stocks for all the sites within the 11 chronosequences. The models also provided reasonable estimates (coefficient of correlation = 0.8) of the microbial biomass C in the 0-10 cm soil layer for three chronosequences, when compared with available measured data. The Century model adequately predicted the magnitude and the overall trend in delta C-13 for the six chronosequences where measured delta C-13 data were available. This study gave independent tests of model performance, as no adjustments were made to the models to generate outputs. Our results suggest that modelling techniques can be successfully used for monitoring soil C stocks and changes, allowing both the identification of current patterns in the soil and the projection of future conditions. Results were used and discussed not only to evaluate soil C dynamics but also to indicate soil C sequestration opportunities for the Brazilian Amazon region. Moreover, modelling studies in these 'forest to pasture' systems have important applications, for example, the calculation of CO2 emissions from land use change in national greenhouse gas inventories. © 2007 Elsevier B.V. All rights reserved.
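The kind of model-observation agreement reported above (e.g. a correlation coefficient of 0.8) can be summarised with a few standard statistics; the sketch below uses placeholder values rather than the chronosequence measurements.

```python
# Minimal sketch: agreement statistics between simulated and observed soil
# organic C stocks, of the kind reported in the abstract. Values are
# placeholders, not the chronosequence data.
import numpy as np

observed  = np.array([41.0, 38.5, 35.2, 36.8, 39.9, 42.3])   # Mg C/ha, 0-20 cm
simulated = np.array([40.1, 37.0, 36.5, 35.9, 41.2, 43.0])

r = np.corrcoef(observed, simulated)[0, 1]                    # correlation
rmse = np.sqrt(np.mean((simulated - observed) ** 2))          # root-mean-square error
bias = np.mean(simulated - observed)                          # mean bias

print(f"r = {r:.2f}, RMSE = {rmse:.2f} Mg C/ha, bias = {bias:.2f} Mg C/ha")
```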
Abstract:
Using a novel numerical method at unprecedented resolution, we demonstrate that structures of small to intermediate scale in rotating, stratified flows are intrinsically three-dimensional. Such flows are characterized by vortices (spinning volumes of fluid), regions of large vorticity gradients, and filamentary structures at all scales. It is found that such structures have predominantly three-dimensional dynamics below a horizontal scale L ≲ LR, where LR is the so-called Rossby radius of deformation, equal to the characteristic vertical scale of the fluid H divided by the ratio of the rotational and buoyancy frequencies f/N. The breakdown of two-dimensional dynamics at these scales is attributed to the so-called "tall-column instability" [D. G. Dritschel and M. de la Torre Juárez, J. Fluid Mech. 328, 129 (1996)], which is active on columnar vortices that are tall after scaling by f/N, or, equivalently, that are narrow compared with LR. Moreover, this instability eventually leads to a simple relationship between typical vertical and horizontal scales: for each vertical wave number (apart from the vertically averaged, barotropic component of the flow) the average horizontal wave number is equal to f/N times the vertical wave number. The practical implication is that three-dimensional modeling is essential to capture the behavior of rotating, stratified fluids. Two-dimensional models are not valid for scales below LR. ©1999 American Institute of Physics.
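Restating the abstract's definitions in symbols (no new results), with rotational frequency f, buoyancy frequency N and characteristic vertical scale H:

```latex
L_R \;=\; \frac{H}{f/N} \;=\; \frac{N H}{f},
\qquad
\langle k_h \rangle \;\approx\; \frac{f}{N}\, k_z
\quad \text{for each vertical wavenumber } k_z \ (\text{excluding the barotropic mode}),
```

with predominantly three-dimensional dynamics at horizontal scales L ≲ LR.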
Abstract:
Although accuracy of digital elevation models (DEMs) can be quantified and measured in different ways, each is influenced by three main factors: terrain character, sampling strategy and interpolation method. These parameters, and their interaction, are discussed. The generation of DEMs from digitised contours is emphasised because this is the major source of DEMs, particularly within member countries of OEEPE. Such DEMs often exhibit unwelcome artifacts, depending on the interpolation method employed. The origin and magnitude of these effects and how they can be reduced to improve the accuracy of the DEMs are also discussed.
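As a small illustration of the interpolation-method dependence noted above, the sketch below builds a DEM from scattered samples of a synthetic surface with different interpolation methods and compares their errors; the surface and sampling are artificial, not digitised contours.

```python
# Minimal sketch: how the choice of interpolation method changes a DEM built
# from scattered samples of a synthetic surface (not real digitised contours).
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(2)
xy = rng.uniform(0, 1, size=(500, 2))                 # sample locations
z = np.sin(3 * xy[:, 0]) * np.cos(3 * xy[:, 1])       # synthetic "terrain"

gx, gy = np.mgrid[0:1:100j, 0:1:100j]                 # regular output grid
truth = np.sin(3 * gx) * np.cos(3 * gy)

for method in ("nearest", "linear", "cubic"):
    dem = griddata(xy, z, (gx, gy), method=method)
    rmse = np.sqrt(np.nanmean((dem - truth) ** 2))    # NaN outside convex hull
    print(f"{method:>7s}: RMSE = {rmse:.4f}")
```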
Abstract:
Composites of wind speeds, equivalent potential temperature, mean sea level pressure, vertical velocity, and relative humidity have been produced for the 100 most intense extratropical cyclones in the Northern Hemisphere winter for the 40-yr ECMWF Re-Analysis (ERA-40) and the high resolution global environment model (HiGEM). Features of conceptual models of cyclone structure—the warm conveyor belt, cold conveyor belt, and dry intrusion—have been identified in the composites from ERA-40 and compared to HiGEM. Such features can be identified in the composite fields despite the smoothing that occurs in the compositing process. The surface features and the three-dimensional structure of the cyclones in HiGEM compare very well with those from ERA-40. The warm conveyor belt is identified in the temperature and wind fields as a mass of warm air undergoing moist isentropic uplift and is very similar in ERA-40 and HiGEM. The rate of ascent is lower in HiGEM, associated with a shallower slope of the moist isentropes in the warm sector. There are also differences in the relative humidity fields in the warm conveyor belt. In ERA-40, the high values of relative humidity are strongly associated with the moist isentropic uplift, whereas in HiGEM these are not so strongly associated. The cold conveyor belt is identified as rearward flowing air that undercuts the warm conveyor belt and produces a low-level jet, and is very similar in HiGEM and ERA-40. The dry intrusion is identified in the 500-hPa vertical velocity and relative humidity. The structure of the dry intrusion compares well between HiGEM and ERA-40 but the descent is weaker in HiGEM because of weaker along-isentrope flow behind the composite cyclone. HiGEM’s ability to represent the key features of extratropical cyclone structure can give confidence in future predictions from this model.
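The cyclone-centred compositing referred to above amounts to averaging fixed-size windows of a field extracted around each cyclone centre; a minimal sketch with synthetic fields (not ERA-40 or HiGEM data) is given below.

```python
# Minimal sketch of cyclone-centred compositing: extract a fixed-size window
# around each cyclone centre and average across cyclones. Data are synthetic
# placeholders, not ERA-40 or HiGEM fields.
import numpy as np

def composite(field_list, centres, half_width=20):
    """Average sub-grids of shape (2h+1, 2h+1) centred on each cyclone."""
    windows = []
    for field, (i, j) in zip(field_list, centres):
        windows.append(field[i - half_width:i + half_width + 1,
                             j - half_width:j + half_width + 1])
    return np.mean(windows, axis=0)

rng = np.random.default_rng(1)
fields  = [rng.normal(size=(181, 360)) for _ in range(100)]   # e.g. MSLP anomalies
centres = [(rng.integers(30, 150), rng.integers(30, 330)) for _ in range(100)]
mslp_composite = composite(fields, centres)
print(mslp_composite.shape)   # (41, 41)
```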
Abstract:
Numerical simulations of magnetic clouds (MCs) propagating through a structured solar wind suggest that MC-associated magnetic flux ropes are highly distorted by inhomogeneities in the ambient medium. In particular, a solar wind configuration of fast wind from high latitudes and slow wind at low latitudes, common at periods close to solar minimum, should distort the cross section of magnetic clouds into concave-outward structures. This phenomenon has been reported in observations of shock front orientations, but not in the body of magnetic clouds. In this study an analytical magnetic cloud model based upon a kinematically distorted flux rope is modified to simulate propagation through a structured medium. This new model is then used to identify specific time series signatures of the resulting concave-outward flux ropes. In situ observations of three well studied magnetic clouds are examined with comparison to the model, but the expected concave-outward signatures are not present. Indeed, the observations are better described by the convex-outward flux rope model. This may be due to a sharp latitudinal transition from fast to slow wind, resulting in a globally concave-outward flux rope, but with convex-outward signatures on a local scale.
Abstract:
Thirty‐three snowpack models of varying complexity and purpose were evaluated across a wide range of hydrometeorological and forest canopy conditions at five Northern Hemisphere locations, for up to two winter snow seasons. Modeled estimates of snow water equivalent (SWE) or depth were compared to observations at forest and open sites at each location. Precipitation phase and duration of above‐freezing air temperatures are shown to be major influences on divergence and convergence of modeled estimates of the subcanopy snowpack. When models are considered collectively at all locations, comparisons with observations show that it is harder to model SWE at forested sites than open sites. There is no universal “best” model for all sites or locations, but comparison of the consistency of individual model performances relative to one another at different sites shows that there is less consistency at forest sites than open sites, and even less consistency between forest and open sites in the same year. A good performance by a model at a forest site is therefore unlikely to mean a good model performance by the same model at an open site (and vice versa). Calibration of models at forest sites provides lower errors than uncalibrated models at three out of four locations. However, benefits of calibration do not translate to subsequent years, and benefits gained by models calibrated for forest snow processes are not translated to open conditions.
Abstract:
Motivation: Intrinsic protein disorder is functionally implicated in numerous biological roles and is, therefore, ubiquitous in proteins from all three kingdoms of life. Determining the disordered regions in proteins presents a challenge for experimental methods, and so recently there has been much focus on the development of improved predictive methods. In this article, a novel technique for disorder prediction, called DISOclust, is described, which is based on the analysis of multiple protein fold recognition models. The DISOclust method is rigorously benchmarked against the top five methods from the CASP7 experiment. In addition, the optimal consensus of the tested methods is determined and the added value from each method is quantified. Results: The DISOclust method is shown to add the most value to a simple consensus of methods, even in the absence of target sequence homology to known structures. A simple consensus of methods that includes DISOclust can significantly outperform all of the previous individual methods tested.
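A "simple consensus of methods" of the kind evaluated above can be as basic as averaging per-residue disorder scores from several predictors and applying a threshold; the sketch below uses placeholder scores, not CASP7 predictions.

```python
# Minimal sketch of a simple per-residue consensus of disorder predictors,
# the kind of combination the abstract evaluates. Scores are placeholders.
import numpy as np

# Rows: individual predictors; columns: residues (probability of disorder).
predictions = np.array([
    [0.1, 0.2, 0.7, 0.9, 0.8, 0.3],
    [0.2, 0.1, 0.6, 0.8, 0.9, 0.2],
    [0.0, 0.3, 0.8, 0.7, 0.7, 0.4],
])

consensus = predictions.mean(axis=0)   # average score per residue
disordered = consensus >= 0.5          # simple decision threshold
print(np.where(disordered)[0])         # indices of predicted disordered residues
```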
Abstract:
In this paper, we list some new orthogonal main effects plans for three-level designs for 4, 5 and 6 factors in 18 runs and compare them with designs obtained from the existing L-18 orthogonal array. We show that these new designs have better projection properties and can provide better parameter estimates for a range of possible models. Additionally, we study designs in other smaller run sizes when there are insufficient resources to perform an 18-run experiment. Plans for three-level designs for 4, 5 and 6 factors in 13 to 17 runs are given. We show that the best designs here are efficient and deserve strong consideration in many practical situations.
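One of the projection properties mentioned above can be checked directly: for a three-level design, whether every pair of factors covers all nine level combinations. The sketch below tests this for an illustrative 9-run design, not one of the paper's new plans.

```python
# Minimal sketch: checking a projection property of a three-level design --
# whether every pair of factors covers all 9 level combinations. The design
# below is an illustrative 9-run array, not one of the paper's new plans.
import numpy as np
from itertools import combinations, product

design = np.array([                 # 9-run, 3-factor, 3-level illustration
    [0, 0, 0], [0, 1, 1], [0, 2, 2],
    [1, 0, 1], [1, 1, 2], [1, 2, 0],
    [2, 0, 2], [2, 1, 0], [2, 2, 1],
])

full = set(product(range(3), repeat=2))
for a, b in combinations(range(design.shape[1]), 2):
    seen = {tuple(row) for row in design[:, [a, b]]}
    print(f"factors ({a}, {b}): covers all 9 combinations: {seen == full}")
```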
Abstract:
This study examined the opinion of in-service and prospective chemistry teachers about the importance of using molecular and crystal models in secondary-level school practice, and investigated some of the reasons for their (non-)usage. The majority of participants stated that the use of models plays an important role in chemistry education and that they would use them more often if the circumstances were more favourable. Many teachers claimed that three-dimensional (3D) models are still not available in sufficient numbers at their schools; they also pointed to the lack of available computer facilities during chemistry lessons. The research revealed that, besides the inadequate material circumstances, fewer than one third of participants are able to use simple (freeware) computer programs for drawing molecular structures and presenting them in virtual space; however, both groups of teachers expressed a willingness to improve their knowledge in the subject area. The investigation points to several actions that could be undertaken to improve the current situation.