940 results for Packing, transportation and storage


Relevance:

100.00%

Publisher:

Abstract:

This paper reports on a research project which examined media coverage and audience perceptions of stem cells and stem cell research in Hungary, using focus groups and a media analysis. A background study was also conducted on the Hungarian legal, social and political situation linked to stem cell research, treatment and storage. Our data show how stem cell research/treatments were framed by the focus group members in terms of medical results/cures and human interest stories – mirroring the dominant frames utilized by the Hungarian press. The spontaneous discourse on stem cells in the groups involved a non-political and non-controversial understanding – also echoing the dominant presentation of the media. Comparing our results with those of a UK study, we found that although there are some similarities, UK and Hungarian focus group participants framed the issue of stem cell research differently in many respects – and these differences often echoed the divergences of the media coverage in the two countries. We conclude by arguing against approaches which attribute only negligible influence to the media – especially in the case of complex scientific topics and when the dominant information source for the public is the media.

Relevance:

100.00%

Publisher:

Abstract:

This project studied the frequency of water contamination at the source, during transportation, and at home, in order to determine the causes of contamination and its impact on the health of children aged 0 to 5 years. The methods used were construction of the infrastructure for three sources of potable water, administration of a questionnaire about socioeconomic status and sanitation behavior, anthropometric measurement of children, and analysis of water and feces. The contamination, first thought to be only a function of rainfall, turned out to be a very complex phenomenon. Water in homes was contaminated (43.4%) with more than 1100 total coliforms/100 ml due to the use of unclean utensils to transport and store water. This socio-economic and cultural problem should be addressed with health education about sanitation. The latrines (found in 43.8% of families) presented a double-edged problem. The extremely high population density reduced the surface area of land per family, which resulted in a severe nutritional deficit (15% of the children) affecting mainly young children, rendering them more susceptible to diarrhea (three episodes/child/year).

Relevance:

100.00%

Publisher:

Abstract:

This research presents several components encompassing the scope of the objective of data partitioning and replication management in a distributed GIS database. Modern Geographic Information Systems (GIS) databases are often large and complicated, so data partitioning and replication management problems need to be addressed in the development of an efficient and scalable solution. Part of the research is to study the patterns of geographical raster data processing and to propose algorithms that improve the availability of such data. These algorithms and approaches target the granularity of geographic data objects as well as data partitioning in geographic databases to achieve high data availability and Quality of Service (QoS) for distributed data delivery and processing. To achieve this goal, a dynamic, real-time approach for mosaicking digital images of different temporal and spatial characteristics into tiles is proposed. This dynamic approach reuses digital images on demand and generates mosaicked tiles only for the required region, according to user requirements such as resolution, temporal range, and target bands, to reduce redundancy in storage and to utilize available computing and storage resources more efficiently. Another part of the research pursued methods for efficient acquisition of GIS data from external heterogeneous databases and Web services, as well as end-user GIS data delivery enhancements, automation, and 3D virtual reality presentation. Vast numbers of computing, network, and storage resources on the Internet sit idle or are not fully utilized. The proposed "Crawling Distributed Operating System" (CDOS) approach employs such resources and creates benefits for the hosts that lend their CPU, network, and storage resources to be used in a GIS database context. The results of this dissertation demonstrate effective ways to develop a highly scalable GIS database. The approach developed in this dissertation has resulted in the creation of the TerraFly GIS database, which is used by the US government, researchers, and the general public to facilitate Web access to remotely sensed imagery and GIS vector information.
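
A minimal sketch of the on-demand mosaicking idea described above, assuming in-memory numpy rasters with known grid offsets and acquisition times; the SourceImage container and mosaic_region function are illustrative placeholders, not the dissertation's actual implementation:

    # Illustrative sketch: compose a tile for a requested region and time range only,
    # reusing existing source images and keeping the most recent pixel available.
    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class SourceImage:                 # hypothetical container for one source raster
        data: np.ndarray               # 2-D array (single band for simplicity)
        y0: int                        # row offset of the image in a global grid
        x0: int                        # column offset of the image in a global grid
        t: float                       # acquisition time

    def mosaic_region(images, y_min, y_max, x_min, x_max, t_min, t_max):
        """Build one tile for the requested window and time range only."""
        tile = np.full((y_max - y_min, x_max - x_min), np.nan)   # unfilled pixels stay NaN
        best_t = np.full(tile.shape, -np.inf)                    # newest time seen per pixel
        for img in images:
            if not (t_min <= img.t <= t_max):
                continue                                         # outside requested time range
            iy0, iy1 = max(img.y0, y_min), min(img.y0 + img.data.shape[0], y_max)
            ix0, ix1 = max(img.x0, x_min), min(img.x0 + img.data.shape[1], x_max)
            if iy0 >= iy1 or ix0 >= ix1:
                continue                                         # no overlap with the request
            src = img.data[iy0 - img.y0:iy1 - img.y0, ix0 - img.x0:ix1 - img.x0]
            dst = (slice(iy0 - y_min, iy1 - y_min), slice(ix0 - x_min, ix1 - x_min))
            newer = img.t > best_t[dst]                          # prefer the most recent pixel
            tile[dst] = np.where(newer, src, tile[dst])
            best_t[dst] = np.where(newer, img.t, best_t[dst])
        return tile

Generating only the requested window, rather than materializing a full mosaic, is the storage- and compute-saving point the abstract makes; everything else here is a simplification.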

Relevance:

100.00%

Publisher:

Abstract:

The Locard exchange principle proposes that a person cannot enter or leave an area or come in contact with an object without an exchange of materials. In the case of scent evidence, the suspect leaves his scent at the location of the crime scene itself or on objects found therein. Human scent evidence collected from a crime scene can be evaluated through the use of specially trained canines to determine an association between the evidence and a suspect. To date, there has been limited research into the volatile organic compounds (VOCs) which comprise human odor and their usefulness in distinguishing among individuals. For the purposes of this research, human scent is defined as the most abundant volatile organic compounds present in the headspace above collected odor samples. An instrumental method has been created for the analysis of the VOCs present in human scent and has been utilized for the optimization of materials used for the collection and storage of human scent evidence. This research project has identified the volatile organic compounds present in the headspace above collected scent samples from different individuals and various regions of the body, with the primary focus on the armpit area and the palms of the hands. Human scent from the armpit area and palms of an individual sampled over time shows lower variation in the relative peak area ratios of the common compounds present than is seen across a population. A comparison of the compounds present in human odor for an individual over time, and across a population, has been conducted and demonstrates that it is possible to instrumentally differentiate individuals based on the volatile organic compounds above collected odor samples.
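
A minimal sketch of the kind of comparison described above, assuming each scent sample has already been reduced to a table of peak areas per compound; the compound names, peak areas, and the cosine-similarity choice are illustrative assumptions, not the study's actual protocol:

    # Illustrative sketch: express each sample as relative peak-area ratios over the
    # compounds common to both samples, then compare the samples pairwise.
    import numpy as np

    def relative_ratios(sample, common):
        """Normalize peak areas of the common compounds so they sum to 1."""
        v = np.array([sample[c] for c in common], dtype=float)
        return v / v.sum()

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Hypothetical peak areas (arbitrary units) for headspace VOCs of two samples.
    s1 = {"nonanal": 120.0, "decanal": 80.0, "hexanoic acid": 40.0}
    s2 = {"nonanal": 200.0, "decanal": 150.0, "hexanoic acid": 60.0}

    common = sorted(set(s1) & set(s2))      # compounds present in both samples
    r1, r2 = relative_ratios(s1, common), relative_ratios(s2, common)
    print(common, r1.round(3), r2.round(3), round(cosine(r1, r2), 4))

Working with relative ratios rather than raw areas is what makes repeated samples from the same individual comparable even when the total amount of collected odor varies between collections.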

Relevance:

100.00%

Publisher:

Abstract:

In certain European countries and the United States of America, canines have been successfully used in human scent identification. There is, however, limited scientific knowledge on the composition of human scent and the detection mechanism that produces an alert from canines. This lack of information has resulted in successful legal challenges to human scent evidence in courts of law. The main objective of this research was to use science to validate the current practices of using human scent evidence in criminal cases. The goals of this study were to utilize Headspace Solid Phase Micro Extraction Gas Chromatography Mass Spectrometry (HS-SPME-GC/MS) to determine the optimum collection and storage conditions for human scent samples, to investigate whether the amount of DNA deposited upon contact with an object affects the alerts produced by human scent identification canines, and to create a prototype pseudo human scent which could be used for training purposes. Hand odor samples which were collected on different sorbent materials and exposed to various environmental conditions showed that human scent samples should be stored without prolonged exposure to UVA/UVB light to allow minimal changes to the overall scent profile. Various methods of collecting human scent from objects were also investigated, and it was determined that passive collection methods yield ten times more VOCs by mass than active collection methods. Through the use of the polymerase chain reaction (PCR), no correlation was found between the amount of DNA that was deposited upon contact with an object and the alerts that were produced by human scent identification canines. Preliminary studies conducted to create a prototype pseudo human scent showed that it is possible to produce fractions of a human scent sample which can be presented to the canines to determine whether specific fractions or the entire sample is needed to produce alerts by the human scent identification canines.

Relevance:

100.00%

Publisher:

Abstract:

With the advent of peer-to-peer networks, and more importantly sensor networks, the desire to extract useful information from continuous and unbounded streams of data has become more prominent. For example, in tele-health applications, sensor-based data streaming systems are used to continuously and accurately monitor Alzheimer's patients and their surrounding environment. Typically, the requirements of such applications necessitate the cleaning and filtering of continuous, corrupted and incomplete data streams gathered wirelessly in dynamically varying conditions. Yet, existing data stream cleaning and filtering schemes are incapable of capturing the dynamics of the environment while simultaneously suppressing the losses and corruption introduced by uncertain environmental, hardware, and network conditions. Consequently, existing data cleaning and filtering paradigms are being challenged. This dissertation develops novel schemes for cleaning data streams received from a wireless sensor network operating under non-linear and dynamically varying conditions. The study establishes a paradigm for validating spatio-temporal associations among data sources to enhance data cleaning. To simplify the complexity of the validation process, the developed solution maps the requirements of the application onto a geometric space and identifies the potential sensor nodes of interest. Additionally, this dissertation models a wireless sensor network data reduction system by ascertaining that segregating data adaptation and prediction processes will augment the data reduction rates. The schemes presented in this study are evaluated using simulation and information theory concepts. The results demonstrate that dynamic conditions of the environment are better managed when validation is used for data cleaning. They also show that when a fast convergent adaptation process is deployed, data reduction rates are significantly improved. Targeted applications of the developed methodology include machine health monitoring, tele-health, environment and habitat monitoring, intermodal transportation and homeland security.
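
A minimal sketch of how an adaptive prediction process can reduce transmissions, assuming a simple dual-prediction scheme in which sensor and sink run the same predictor; the LMS filter, its parameters, and the transmit-on-error rule are illustrative choices, not the dissertation's actual algorithm:

    # Illustrative dual-prediction sketch: a reading is transmitted only when the
    # prediction error exceeds a threshold, so both ends stay consistent while the
    # number of transmitted samples (and hence traffic) is reduced.
    import numpy as np

    def lms_reduce(readings, order=3, mu=0.01, threshold=0.25):
        w = np.zeros(order)                    # adaptive filter weights shared by both ends
        history = np.zeros(order)              # last values as known to sensor AND sink
        sent = []                              # (index, value) pairs actually transmitted
        for i, x in enumerate(readings):
            x_hat = float(w @ history)         # prediction available at sensor and sink
            err = x - x_hat
            if abs(err) > threshold:
                sent.append((i, x))            # transmit the raw reading
                w += mu * err * history        # adapt only on transmitted samples
                history = np.roll(history, 1)
                history[0] = x
            else:
                history = np.roll(history, 1)  # sink reconstructs with the prediction,
                history[0] = x_hat             # so the shared state stays consistent
        return sent

    rng = np.random.default_rng(0)
    stream = np.sin(np.linspace(0, 20, 500)) + 0.05 * rng.standard_normal(500)
    print(f"transmitted {len(lms_reduce(stream))} of {stream.size} readings")

The faster the predictor converges, the fewer readings exceed the error threshold, which is the intuition behind the reported link between fast-convergent adaptation and higher data reduction rates.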

Relevance:

100.00%

Publisher:

Abstract:

A novel modeling approach is applied to karst hydrology. Long-standing problems in karst hydrology and solute transport are addressed using Lattice Boltzmann methods (LBMs). These methods contrast with other modeling approaches that have been applied to karst hydrology. The motivation of this dissertation is to develop new computational models for solving ground water hydraulics and transport problems in karst aquifers, which are widespread around the globe. This research tests the viability of the LBM as a robust alternative numerical technique for solving large-scale hydrological problems. The LB models applied in this research are briefly reviewed and there is a discussion of implementation issues. The dissertation focuses on testing the LB models. The LBM is tested for two different types of inlet boundary conditions for solute transport in finite and effectively semi-infinite domains. The LBM solutions are verified against analytical solutions. Zero-diffusion transport and Taylor dispersion in slits are also simulated and compared against analytical solutions. These results demonstrate the LBM's flexibility as a solute transport solver. The LBM is applied to simulate solute transport and fluid flow in porous media traversed by larger conduits. A LBM-based macroscopic flow solver (Darcy's law-based) is linked with an anisotropic dispersion solver. Spatial breakthrough curves in one and two dimensions are fitted against the available analytical solutions. This provides a steady flow model with capabilities routinely found in ground water flow and transport models (e.g., the combination of MODFLOW and MT3D). However, the new LBM-based model retains the ability to solve inertial flows that are characteristic of karst aquifer conduits. Transient flows in a confined aquifer are solved using two different LBM approaches. The analogy between Fick's second law (the diffusion equation) and the transient ground water flow equation is used to solve the transient head distribution. An altered-velocity flow solver with a source/sink term is applied to simulate a drawdown curve. Hydraulic parameters such as transmissivity and the storage coefficient are linked with LB parameters. These capabilities complete the LBM's effective treatment of the types of processes that are simulated by standard ground water models. The LB model is verified against field data for drawdown in a confined aquifer.
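
A minimal sketch of the diffusion/transient-flow analogy mentioned above, using a 1-D D1Q3 lattice Boltzmann relaxation scheme for pure diffusion; the lattice weights, relaxation time, and grid size are illustrative, and this is not the dissertation's solver:

    # Illustrative D1Q3 lattice Boltzmann sketch for the diffusion equation, which
    # by the Fick's-law analogy also stands in for transient confined ground water
    # flow (hydraulic head playing the role of concentration).
    import numpy as np

    nx, steps = 200, 2000
    w = np.array([2/3, 1/6, 1/6])          # D1Q3 weights for rest, +x, -x populations
    tau = 0.8                              # relaxation time
    D = (tau - 0.5) / 3                    # diffusion coefficient in lattice units

    C = np.zeros(nx)
    C[nx // 2] = 1.0                       # initial pulse of solute (or head change)
    f = w[:, None] * C[None, :]            # start at equilibrium

    for _ in range(steps):
        feq = w[:, None] * C[None, :]      # equilibrium distributions
        f += (feq - f) / tau               # BGK collision step
        f[1] = np.roll(f[1], 1)            # stream the +x population
        f[2] = np.roll(f[2], -1)           # stream the -x population
        C = f.sum(axis=0)                  # macroscopic concentration / head

    x = np.arange(nx)
    var = float((C * (x - nx // 2) ** 2).sum() / C.sum())
    print(var, 2 * D * steps)              # numerical variance vs analytical 2*D*t

The pulse spreads into a Gaussian whose variance grows as 2Dt, which is the kind of analytical check against which the dissertation verifies its LB solutions.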

Relevance:

100.00%

Publisher:

Abstract:

Knowledge of crimes that have occurred in hotels has been scarce. The authors explore the nature and causes of hotel crimes in a U.S. metropolitan area. Levels of crime were directly related to the size of the hotel, a target market of business travelers, access to public transportation, and an unsafe image of the environment surrounding the hotel. Crime prevention programs based on the findings can be developed to protect the safety of guests and property.

Relevance:

100.00%

Publisher:

Abstract:

Modern civilization has developed principally through man's harnessing of forces. For centuries man had to rely on wind, water and animal force as principal sources of power. The advent of the industrial revolution, electrification and the development of new technologies led to the application of wood, coal, gas, petroleum, and uranium to fuel new industries, produce goods and means of transportation, and generate the electrical energy which has become such an integral part of our lives. The geometric growth in energy consumption, coupled with the world's unrestricted growth in population, has caused a disproportionate use of these limited natural resources. The resulting energy predicament could have serious consequences within the next half century unless we commit ourselves to the philosophy of effective energy conservation and management. National legislation, along with the initiative of private industry and growing interest in the private sector, has played a major role in stimulating the adoption of energy-conserving laws, technologies, measures, and practices. The predicament is a matter of serious concern in the United States, where ninety-five percent of the commercial and industrial facilities which will be standing in the year 2000 - many in need of retrofit - are currently in place. To conserve energy, it is crucial to first understand how a facility consumes energy, how its users' needs are met, and how all internal and external elements interrelate. For this purpose, the major thrust of this report will be to emphasize the need to develop an energy conservation plan that incorporates energy auditing and surveying techniques. Numerous energy-saving measures and practices will be presented, ranging from simple no-cost opportunities to capital intensive investments.

Relevance:

100.00%

Publisher:

Abstract:

The origin and modes of transportation and deposition of inorganic sedimentary material of the Black Sea were studied in approximately 60 piston, gravity, and Kasten cores. The investigation showed that the sediment derived from the north and northwest (especially from the Danube) has a low calcite-dolomite ratio and a high quartz-feldspar ratio. Rock fragments are generally not abundant; garnet is the principal heavy mineral and illite is the predominant clay mineral. This sedimentary material differs markedly from that carried by Anatolian rivers, which is characterized by a high calcite-dolomite ratio and a low quartz-feldspar ratio. Rock fragments are abundant; pyroxene is the principal heavy mineral and montmorillonite is the predominant clay mineral. In general, the clay fraction is large in all sediments (27.6-86.9 percent), and the lateral distribution indicates an increase in clay content from the coasts toward two centers in the western and eastern Black Sea basin. Illite is the most common clay mineral in the Black Sea sediments. The lateral changes in composition of the clay minerals can easily be traced to the petrologic character of the northern (rich in illite) and southern (rich in montmorillonite) source areas. In almost all cores, a rhythmic change of the montmorillonite-illite ratio with depth was observed. These changes may be related to the changing influence of the two provinces during the Holocene and late Pleistocene. Higher montmorillonite content seems to indicate climatic changes, probably stages of glaciation and permafrost in the northern area, at which time the illite supply was diminished to a large extent. The composition of the sand fraction is related to the different petrologic and morphologic characteristics of two major source provinces: (1) a northern province (rich in quartz, feldspars, and garnet) characterized by a low elevation, comprising the Danube basin area and the rivers draining the Russian platform; and (2) a southern province (rich in pyroxene and volcanic and metamorphic rocks) in the mountainous region of Anatolia and the Caucasus, characterized by small but extremely erosive rivers. The textural properties (graded bedding) of the deep-sea sand layers clearly suggest deposition from turbidity currents. The carbonate content of the contemporary sediments ranges from 5 to 65 percent. It increases from the coast to a maximum in two centers in the western and eastern basin. This pattern reflects the distribution of the <2-µm fraction. The contemporary mud sedimentation is governed by two important factors: (1) the deposition of terrigenous allochthonous material of low carbonate content originating from the surrounding hinterland (northern and southern source areas), and (2) the autochthonous production of large quantities of biogenic calcite by coccolithophores during the last period of about 3,000-4,000 years.

Relevance:

100.00%

Publisher:

Abstract:

We would like to thank EPSRC for a Doctoral Training Grant (G.A.M) and the Erasmus programme for supporting the study visit to Turin (R.W). We would also like to thank Dr. Federico Cesano for SEM/EDX measurements and for fruitful discussion. Dr. Jo Duncan is thanked for his tremendous insight during XRD interpretation.

Relevance:

100.00%

Publisher:

Abstract:

Human use of the oceans is increasingly in conflict with conservation of endangered species. Methods for managing the spatial and temporal placement of industries such as the military, fishing, transportation, and offshore energy have historically been post hoc; i.e. the time and place of human activity is often already determined before assessment of environmental impacts. In this dissertation, I build robust species distribution models in two case study areas, the US Atlantic (Best et al. 2012) and British Columbia (Best et al. 2015), predicting presence and abundance, respectively, from scientific surveys. These models are then applied to novel decision frameworks for preemptively suggesting optimal placement of human activities in space and time to minimize ecological impacts: siting for offshore wind energy development, and routing ships to minimize risk of striking whales. Both decision frameworks relate the tradeoff between conservation risk and industry profit with synchronized variable and map views as online spatial decision support systems.

For siting offshore wind energy development (OWED) in the U.S. Atlantic (chapter 4), bird density maps are combined across species with weights of OWED sensitivity to collision and displacement, and 10 km² sites are compared against OWED profitability based on average annual wind speed at 90 m hub heights and distance to the transmission grid. A spatial decision support system enables toggling between the map and tradeoff plot views by site. A selected site can be inspected for sensitivity to cetaceans throughout the year, so as to capture the months of the year which minimize episodic impacts of pre-operational activities such as seismic airgun surveying and pile driving.
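
A minimal sketch of the per-site scoring idea in this paragraph, assuming per-site bird densities and sensitivity weights are already in hand; the species weights, wind speeds, and profitability proxy below are illustrative placeholders, not the dissertation's actual data or model:

    # Illustrative sketch: combine species density maps with collision/displacement
    # sensitivity weights into a per-site conservation score, and pair it with a
    # simple profitability proxy so sites can be compared on a tradeoff plot.
    import numpy as np

    # Hypothetical per-site densities (birds/km^2) for three species at four sites.
    density = np.array([[5.0, 1.0, 0.2, 3.0],
                        [0.5, 4.0, 0.1, 2.0],
                        [2.0, 2.0, 0.3, 1.0]])
    collision = np.array([0.9, 0.3, 0.6])         # per-species collision sensitivity
    displacement = np.array([0.2, 0.8, 0.4])      # per-species displacement sensitivity

    sensitivity_score = ((collision + displacement)[:, None] * density).sum(axis=0)

    wind_speed = np.array([8.5, 9.2, 7.1, 8.9])        # m/s at 90 m hub height (hypothetical)
    dist_to_grid = np.array([20.0, 45.0, 10.0, 30.0])  # km to transmission (hypothetical)
    profit_proxy = wind_speed**3 - 0.1 * dist_to_grid  # crude ranking proxy only

    for i, (s, p) in enumerate(zip(sensitivity_score, profit_proxy)):
        print(f"site {i}: sensitivity={s:.1f}, profit proxy={p:.1f}")

Plotting sensitivity against the profit proxy per site reproduces, in miniature, the tradeoff view that the spatial decision support system toggles with the map.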

Routing ships to avoid whale strikes (chapter 5) can be similarly viewed as a tradeoff, but is a different problem spatially. A cumulative cost surface is generated from density surface maps and the conservation status of cetaceans, and then applied as a resistance surface to calculate least-cost routes between start and end locations, i.e. ports and entrance locations to study areas. Varying a multiplier on the cost surface enables calculation of multiple routes with different costs to conservation of cetaceans versus cost to the transportation industry, measured as distance. Similar to the siting chapter, a spatial decision support system enables toggling between the map and tradeoff plot views of proposed routes. The user can also input arbitrary start and end locations to calculate the tradeoff on the fly.
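
A minimal sketch of the least-cost routing step described above, using a plain Dijkstra search over a gridded resistance surface; the random density layer, 4-connected neighborhood, and simple distance-plus-conservation cost are illustrative simplifications of the chapter's method:

    # Illustrative least-cost route on a cumulative cost surface with Dijkstra.
    # Raising the multiplier on the conservation layer sweeps out routes with
    # different conservation-vs-distance tradeoffs.
    import heapq
    import numpy as np

    def least_cost_route(cost, start, end):
        """4-connected Dijkstra; `cost` is the per-cell resistance surface."""
        nrow, ncol = cost.shape
        dist = np.full(cost.shape, np.inf)
        prev = {}
        dist[start] = cost[start]
        pq = [(cost[start], start)]
        while pq:
            d, (r, c) = heapq.heappop(pq)
            if (r, c) == end:
                break
            if d > dist[r, c]:
                continue                       # stale queue entry
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < nrow and 0 <= nc < ncol:
                    nd = d + cost[nr, nc]
                    if nd < dist[nr, nc]:
                        dist[nr, nc] = nd
                        prev[(nr, nc)] = (r, c)
                        heapq.heappush(pq, (nd, (nr, nc)))
        path, node = [], end
        while node != start:                   # walk back from end to start
            path.append(node)
            node = prev[node]
        return [start] + path[::-1], dist[end]

    whale_density = np.random.default_rng(1).random((50, 80))   # hypothetical layer
    multiplier = 5.0                                             # conservation weight
    cost = 1.0 + multiplier * whale_density                      # distance + conservation
    route, total = least_cost_route(cost, (0, 0), (49, 79))
    print(len(route), round(total, 2))

Re-running with several multiplier values yields the family of candidate routes whose conservation cost and added distance populate the tradeoff plot.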

Essential inputs to these decision frameworks are the distributions of the species. The two preceding chapters comprise species distribution models from two case study areas, the U.S. Atlantic (chapter 2) and British Columbia (chapter 3), predicting presence and density, respectively. Although density is preferred for estimating potential biological removal, per Marine Mammal Protection Act requirements in the U.S., all the necessary parameters, especially distance and angle of observation, are less readily available across publicly mined datasets.

In the case of predicting cetacean presence in the U.S. Atlantic (chapter 2), I extracted datasets from the online OBIS-SEAMAP geo-database, and integrated scientific surveys conducted by ship (n=36) and aircraft (n=16), weighting a Generalized Additive Model by minutes surveyed within space-time grid cells to harmonize effort between the two survey platforms. For each of 16 cetacean species guilds, I predicted the probability of occurrence from static environmental variables (water depth, distance to shore, distance to continental shelf break) and time-varying conditions (monthly sea-surface temperature). To generate maps of presence vs. absence, Receiver Operating Characteristic (ROC) curves were used to define the optimal threshold that minimizes false positive and false negative error rates. I integrated model outputs, including tables (species in guilds, input surveys) and plots (fit of environmental variables, ROC curve), into an online spatial decision support system, allowing for easy navigation of models by taxon, region, season, and data provider.
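
A minimal sketch of the ROC-based threshold step described above, using scikit-learn's roc_curve and Youden's J statistic as one common way to balance false positive and false negative rates; the simulated presence/absence labels and scores are placeholders for the GAM output:

    # Illustrative sketch: pick the presence/absence cutoff that maximizes
    # Youden's J = TPR - FPR on the ROC curve of predicted occurrence probabilities.
    import numpy as np
    from sklearn.metrics import roc_curve

    rng = np.random.default_rng(42)
    y_true = rng.integers(0, 2, 500)                                    # hypothetical presence/absence
    y_score = np.clip(0.3 * y_true + rng.normal(0.35, 0.2, 500), 0, 1)  # hypothetical model output

    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    best = np.argmax(tpr - fpr)                    # index maximizing Youden's J
    cutoff = thresholds[best]
    presence = y_score >= cutoff                   # binary presence map values
    print(f"optimal threshold = {cutoff:.3f}, TPR = {tpr[best]:.2f}, FPR = {fpr[best]:.2f}")

Maximizing Youden's J is equivalent to jointly minimizing the false positive and false negative error rates along the ROC curve, which is the criterion the paragraph describes.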

For predicting cetacean density within the inner waters of British Columbia (chapter 3), I calculated density from systematic, line-transect marine mammal surveys over multiple years and seasons (summer 2004, 2005, 2008, and spring/autumn 2007) conducted by Raincoast Conservation Foundation. Abundance estimates were calculated using two different methods: Conventional Distance Sampling (CDS) and Density Surface Modelling (DSM). CDS generates a single density estimate for each stratum, whereas DSM explicitly models spatial variation and offers potential for greater precision by incorporating environmental predictors. Although DSM yields a more relevant product for the purposes of marine spatial planning, CDS has proven to be useful in cases where fewer observations are available for seasonal and inter-annual comparison, particularly for the scarcely observed elephant seal. Abundance estimates are provided on a stratum-specific basis. Steller sea lions and harbour seals are further differentiated by ‘hauled out’ and ‘in water’. This analysis updates previous estimates (Williams & Thomas 2007) by including additional years of effort, providing greater spatial precision with the DSM method over CDS, novel reporting for spring and autumn seasons (rather than summer alone), and providing new abundance estimates for Steller sea lion and northern elephant seal. In addition to providing a baseline of marine mammal abundance and distribution, against which future changes can be compared, this information offers the opportunity to assess the risks posed to marine mammals by existing and emerging threats, such as fisheries bycatch, ship strikes, and increased oil spill and ocean noise risks associated with growing container ship and oil tanker traffic in British Columbia’s continental shelf waters.
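
A minimal sketch of the conventional distance sampling (CDS) arithmetic behind a stratum-specific abundance estimate; the half-normal detection function and every number below are illustrative placeholders, not the survey's fitted values:

    # Illustrative conventional distance sampling calculation: estimate the effective
    # strip half-width (ESW) from a half-normal detection function, then density and
    # abundance for one stratum.  All inputs are made-up placeholders.
    import math

    sigma = 0.45           # half-normal scale (km), as if fitted to perpendicular distances
    w = 1.5                # truncation distance (km)
    n_groups = 62          # detected groups in the stratum
    mean_group_size = 2.3  # expected group size
    L = 410.0              # total transect length surveyed (km)
    A = 5200.0             # stratum area (km^2)

    # ESW = integral of the half-normal detection probability g(x) from 0 to w.
    esw = sigma * math.sqrt(math.pi / 2) * math.erf(w / (sigma * math.sqrt(2)))

    density = n_groups * mean_group_size / (2 * L * esw)   # animals per km^2
    abundance = density * A
    print(f"ESW = {esw:.3f} km, density = {density:.3f}/km^2, N ≈ {abundance:.0f}")

DSM starts from the same detection-corrected counts but models density spatially with environmental covariates instead of producing one stratum-wide figure.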

Starting with marine animal observations at specific coordinates and times, I combine these data with environmental data, often satellite derived, to produce seascape predictions generalizable in space and time. These habitat-based models enable prediction of encounter rates and, in the case of density surface models, abundance that can then be applied to management scenarios. Specific human activities, OWED and shipping, are then compared within a tradeoff decision support framework, enabling interchangeable map and tradeoff plot views. These products make complex processes transparent, allowing conservation interests, industry, and stakeholders to game scenarios towards optimal marine spatial management, fundamental to the tenets of marine spatial planning, ecosystem-based management and dynamic ocean management.

Relevance:

100.00%

Publisher:

Abstract:

Carbon Capture and Storage (CCS) technologies provide a means to significantly reduce carbon emissions from the existing fleet of fossil-fired plants, and hence can facilitate a gradual transition from conventional to more sustainable sources of electric power. This is especially relevant for coal plants, which have a CO2 emission rate roughly twice that of natural gas plants. Of the different kinds of CCS technology available, post-combustion amine-based CCS is the best developed and hence most suitable for retrofitting an existing coal plant. The high costs of operating CCS could be reduced by enabling flexible operation through amine storage or by allowing partial capture of CO2 during periods of high electricity prices. This flexibility is also found to improve the power plant's ramp capability, enabling it to offset the intermittency of renewable power sources. This thesis proposes a solution to problems associated with two promising technologies for decarbonizing the electric power system: the high cost of the energy penalty of CCS, and the intermittency and non-dispatchability of wind power. It explores the economic and technical feasibility of a hybrid system consisting of a coal plant retrofitted with a post-combustion amine-based CCS system equipped with the option to perform partial capture or amine storage, and a co-located wind farm. A techno-economic assessment of the performance of the hybrid system is carried out both from the perspective of the stakeholders (utility owners, investors, etc.) and from that of the power system operator.

In order to perform the assessment from the perspective of the facility owners (e.g., electric power utilities, independent power producers), an optimal design and operating strategy of the hybrid system is determined for both the amine storage and partial capture configurations. A linear optimization model is developed to determine the optimal component sizes and capture rates for the hybrid system while meeting constraints on annual average CO2 emission targets and on the variability of the combined power output. Results indicate that there are economic benefits of flexible operation relative to conventional CCS, and demonstrate that the hybrid system could operate as an energy storage system: providing an effective pathway for wind power integration as well as a mechanism to mute the variability of intermittent wind power.
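
A minimal sketch of the kind of linear program described above, choosing an annual wind energy contribution and an average capture fraction against an emissions cap; the variables, coefficients, and scipy-based formulation are illustrative assumptions, not the thesis model:

    # Illustrative linear program: choose annual wind energy (x0, MWh) and the
    # average CO2 capture fraction (x1) at minimum cost, subject to an annual
    # emissions cap and a minimum-energy requirement.  All coefficients are made up.
    import numpy as np
    from scipy.optimize import linprog

    coal_energy = 3.0e6        # MWh/yr delivered by the coal unit (fixed, hypothetical)
    coal_co2 = 2.9e6           # t CO2/yr from the coal unit before capture (hypothetical)
    demand = 4.0e6             # MWh/yr the hybrid system must supply (hypothetical)
    cap = 1.2e6                # t CO2/yr allowed on annual average (hypothetical)

    cost_wind = 35.0           # $/MWh levelized cost of added wind (hypothetical)
    cost_capture = 55.0        # $/t CO2 captured, including energy penalty (hypothetical)

    c = np.array([cost_wind, cost_capture * coal_co2])   # objective coefficients

    A_ub = np.array([[-1.0, 0.0],          # x0 >= demand - coal_energy
                     [0.0, -coal_co2]])    # coal_co2 * (1 - x1) <= cap
    b_ub = np.array([-(demand - coal_energy),
                     cap - coal_co2])

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, 0.9)])
    print(res.x, res.fun)      # optimal [wind MWh, capture fraction] and total cost

The thesis model additionally sizes storage and limits output variability; this sketch only shows how an emissions-cap constraint and a cost objective fit the linear-programming form.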

In order to assess the performance of the hybrid system from the perspective of the system operator, a modified Unit Commitment/Economic Dispatch model is built to represent the techno-economic aspects of operating the hybrid system within a power grid. The hybrid system is found to be effective in helping the power system meet an average CO2 emissions limit equivalent to the CO2 emission rate of a state-of-the-art natural gas plant, and in reducing power system operation costs and the number and magnitude of energy and reserve scarcity events.