Abstract:
Hydrogen stratification and atmosphere mixing are very important phenomena in nuclear reactor containments when severe accidents are studied and simulated. Hydrogen generation, distribution and accumulation in certain parts of the containment may pose a great risk of a pressure increase induced by hydrogen combustion and thus challenge the integrity of the NPP containment. Accurate prediction of hydrogen distribution is important with respect to the safety design of an NPP. Modelling methods typically used for containment analyses include both lumped parameter and field codes. The lumped parameter method is universally used in containment codes because of its versatility, flexibility and simplicity. It allows fast, full-scale simulations in which different containment geometries with the relevant engineered safety features can be modelled. Lumped parameter methods for modelling gas stratification and mixing are presented and discussed in this master's thesis. Experimental research is widely used in containment analyses. The HM-2 experiment on hydrogen stratification and mixing, conducted at the THAI facility in Germany, is calculated with the APROS lumped parameter containment package and the APROS 6-equation thermal hydraulic model. The main purpose was to study whether the convection term included in the momentum conservation equation of the 6-equation model gives any notable advantage over the simplified lumped parameter approach. Finally, a simple containment test case (a high steam release into a narrow steam generator room inside a large dry containment) was calculated with both APROS models. In this case, the aim was to determine extreme containment conditions in which the effect of the convection term was expected to be large. The calculation results showed that both the APROS containment model and the 6-equation model could reproduce the hydrogen stratification in the THAI test well if the vertical nodalisation was dense enough. However, in more complicated cases numerical diffusion may distort the results. The calculation of light gas stratification could probably be improved by applying a second-order discretisation scheme to the modelling of gas flows. If the gas flows are relatively high, the convection term of the momentum equation is necessary to model the pressure differences between adjacent nodes reasonably.
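For reference, a generic one-dimensional momentum balance of the kind solved in 6-equation thermal hydraulic models can be written as follows; this is a textbook form, not necessarily the exact APROS formulation, and the second term on the left is the convection term that simplified lumped parameter momentum equations omit:

\[
\frac{\partial (\rho u)}{\partial t} + \frac{\partial (\rho u^{2})}{\partial x} = -\frac{\partial p}{\partial x} + \rho g_{x} - F_{\mathrm{wall}},
\]

where \(\rho\) is the density, \(u\) the velocity, \(p\) the pressure, \(g_{x}\) the gravity component along the flow path and \(F_{\mathrm{wall}}\) the wall friction term.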
Abstract:
An interferometer for a low-resolution portable Fourier transform mid-infrared spectrometer was developed and studied experimentally. The final aim was a concept for a commercial prototype. Because of the required portability, the interferometer had to be compact and insensitive to external temperature variations and mechanical vibrations. To minimise the size and manufacturing costs, a Michelson interferometer based on plane mirrors and a porch swing bearing was selected, and no dynamic alignment system was applied. The driving motor was a linear voice coil actuator, chosen to avoid mechanical contact of the moving parts. The drive capability at the low mirror velocities required by photoacoustic detectors was studied. In total, four versions of such an interferometer were built and experimentally studied. The thermal stability during external temperature variations and the alignment stability over the mirror travel were measured using the modulation depth of a wide-diameter laser beam. A method for estimating the mirror tilt angle from the modulation depth was developed to take into account the effect of the non-uniform intensity distribution of the laser beam. Finally, the spectrometer stability was also studied using infrared radiation. The latest interferometer was assembled into a mid-infrared spectrometer with a spectral range from 750 cm−1 to 4500 cm−1. The interferometer size was (197 × 95 × 79) mm³ with a beam diameter of 25 mm. The alignment stability, expressed as the change of the tilt angle over the mirror travel of 3 mm, was 5 μrad, which decreases the modulation depth by only about 0.7 percent in the infrared at 3000 cm−1. During the temperature rise, the modulation depth at 3000 cm−1 changed by about 1 to 2 percentage units per degree Celsius in the short term and by less than 0.2 percentage units per degree Celsius over the total temperature rise of 30 °C. The unapodised spectral resolution was 4 cm−1, limited by the aperture size. The best achieved signal-to-noise ratio was about 38 000:1 with a commercially available DLaTGS detector. Although the vibration sensitivity still requires improvement, the interferometer as a whole performed very well and could be further developed to meet all the requirements of a portable and stable spectrometer.
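For orientation, the quoted 0.7 percent loss can be reproduced with the classical tilt-loss relation for a uniform circular beam; the sketch below uses that textbook formula only, whereas the thesis method additionally accounts for the non-uniform laser beam profile.

```python
# Minimal sketch (not the thesis code): modulation-depth loss for a mirror tilt
# alpha in a plane-mirror Michelson interferometer, assuming a uniform circular
# beam of diameter D. Textbook relation M = 2*J1(x)/x with
# x = 2*pi*sigma*(2*alpha)*(D/2) = 2*pi*sigma*alpha*D, where the factor 2 on
# alpha accounts for reflection doubling the wavefront tilt.
import numpy as np
from scipy.special import j1

sigma = 3000e2   # wavenumber: 3000 cm^-1 expressed in m^-1
alpha = 5e-6     # change of mirror tilt over the 3 mm travel: 5 microrad
D = 25e-3        # beam diameter: 25 mm

x = 2 * np.pi * sigma * alpha * D
M = 2 * j1(x) / x
print(f"relative modulation depth: {M:.4f} (loss ~ {100 * (1 - M):.2f} %)")
# prints a loss of roughly 0.7 %, consistent with the value quoted above
```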
Abstract:
Network externalities and two-sided markets in the context of web services and value creation are not a widely discussed topic in the academic literature. The explosive rise in the number of Internet users has created a strong base for many successful web services and pushed many firms towards e-business and online service based business models. Thus the subject of this thesis, the role of network externalities in the value creation process of a commercial web service for two-sided international markets, is a very current and interesting topic to examine. The objective of this Master's Thesis is to advance the study of network externalities from the viewpoint of two-sided markets and network effects, as well as to describe the value creation and value co-creation process in commercial web service business models. The main proposition is that the larger the network of customers and the greater the number of users the web service is able to attract, the more value and the stronger positive network externalities the service is able to create for each customer group. The empirical research of this study was carried out on a commercial web service targeted at Russian consumers and Finnish business users. The findings suggest that the size of the network is strongly related to the value experienced by the customers, and that the value creation process of a commercial web service targeted at two-sided international markets differs from value creation for one-sided or purely domestic markets.
Abstract:
The intensive use of pesticides has contaminated soil and groundwater. The application of herbicides as controlled release formulations may reduce the environmental damage related to their use, because it may optimize the efficiency of the active ingredient and thus reduce the recommended dose. The objective of this study was to evaluate the persistence of the herbicide atrazine applied as a commercial formulation (COM) and as a controlled release formulation (xerogel - XER) in an Oxisol. The experiment was a split-plot randomized block design with four replications, in a (2 x 6) + 1 arrangement. The two formulations (COM and XER) were assigned to the main plots and the different atrazine concentrations (0, 3200, 3600, 4200, 5400 and 8000 g atrazine ha-1) to the sub-plots. Persistence was determined by means of dissipation kinetics and bioavailability tests. The bioassay methodology used to assess atrazine availability is efficient and makes it possible to distinguish the tested formulations. The availability of atrazine in the XER formulation is higher than in the commercial formulation in two periods: up to 5 days after herbicide application and on the 35th day after application. The XER formulation tends to be more persistent than the COM formulation.
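As a point of reference for the dissipation kinetics mentioned above, herbicide dissipation in soil is commonly described by a first-order model (shown here only as a common reference form; the abstract does not state which kinetic model was actually fitted):

\[
C(t) = C_{0}\,e^{-kt}, \qquad \mathrm{DT}_{50} = \frac{\ln 2}{k},
\]

where \(C_{0}\) is the initial concentration, \(k\) the dissipation rate constant and \(\mathrm{DT}_{50}\) the time required for 50% of the applied herbicide to dissipate.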
Abstract:
Mobility of atrazine in soil has contributed to the detection of levels above the legal limit in surface water and groundwater in Europe and the United States. The use of new formulations can reduce or minimize the impacts caused by the intensive use of this herbicide in Brazil, mainly in regions with higher agricultural intensification. The objective of this study was to compare the leaching of a commercial formulation of atrazine (WG) with that of a controlled release formulation (xerogel) using bioassay and chromatographic methods of analysis. The experiment was a split-plot randomized block design with four replications, in a (2 x 6) + 1 arrangement. The two formulations of atrazine (WG and xerogel) were allocated to the main plots and the herbicide concentrations (0, 3200, 3600, 4200, 5400 and 8000 g ha-1) to the subplots. Leaching was determined comparatively using oat bioassays and chromatographic analysis. The results showed a greater concentration of the herbicide in the topsoil (0-4 cm) in the treatment with the xerogel formulation than with the commercial formulation, which contradicts the results obtained with the bioassays, probably because the amount of herbicide available for uptake by plants in the xerogel formulation is less than that available in the WG formulation.
Abstract:
Poster at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
With the rapid development of society and lifestyles, the generation of commercial waste is becoming more complicated to control. The situation of packaging waste and food waste – the main fractions of commercial waste – in different countries in Europe and Asia is analyzed in order to evaluate the existing waste management system in the city of Hanoi, Vietnam, and to suggest necessary improvements. From all of the city's waste generation sources, a total of approximately 4,000 tons of mixed waste is transported to the composting facility and the disposal site, emitting a huge amount, about 1.6 Mt, of greenhouse gases to the environment. Recycling takes place spontaneously through informal pickers, which makes the whole system difficult to manage and the overall data uncertain. Based on a comparative calculation, which results in only approximately 0.17 Mt of CO2-equivalent emissions, incineration is suggested as the solution to the problems of the overloaded landfill and the rising energy demand of the inhabitants.
Abstract:
Three horse-derived antivenoms were tested for their ability to neutralize the lethal, hemorrhagic, edema-forming, defibrinating and myotoxic activities induced by the venom of Bothrops atrox from Antioquia and Chocó (Colombia). The following antivenoms were used: a) a polyvalent (crotaline) antivenom produced by Instituto Clodomiro Picado (Costa Rica), b) a monovalent antibothropic antivenom produced by Instituto Nacional de Salud-INS (Bogotá), and c) a new monovalent anti-B. atrox antivenom produced with the venom of B. atrox from Antioquia and Chocó. The three antivenoms neutralized all toxic activities tested, albeit with different potencies. The new monovalent anti-B. atrox antivenom showed the highest neutralizing ability against the edema-forming and defibrinating effects of B. atrox venom (41 ± 2 and 100 ± 32 µl antivenom/mg venom, respectively), suggesting that it should be useful in the treatment of B. atrox envenomation in Antioquia and Chocó.
Abstract:
This thesis concentrates on the validation of a generic thermal hydraulic computer code, TRACE, against the challenges of the VVER-440 reactor type. The capability of the code to model the VVER-440 geometry and the thermal hydraulic phenomena specific to this reactor design has been examined and demonstrated to be acceptable. The main challenge in VVER-440 thermal hydraulics appeared in the modelling of the horizontal steam generator, where the difficulty lies not in the code physics or numerics but in the formulation of a representative nodalization structure. Another VVER-440 speciality, the hot leg loop seals, challenges system codes in general but proved readily representable. Computer code models have to be validated against experiments to achieve confidence in them. When a new computer code is to be used for nuclear power plant safety analysis, it must first be validated against a large variety of different experiments. The validation process has to cover both the code itself and the code input. Uncertainties of different natures are identified in the different phases of the validation procedure and can even be quantified. This thesis presents a novel approach to input model validation and uncertainty evaluation in the different stages of the computer code validation procedure. It also demonstrates that in safety analysis there are inevitably significant uncertainties that are not statistically quantifiable; they need to be, and can be, addressed by other, less simplistic means, ultimately relying on the competence of the analysts and the capability of the community to support the experimental verification of analytical assumptions. This approach essentially complements the commonly used uncertainty assessment methods, which usually rely on statistical methods alone.
Abstract:
The ability to use the exact coordinates of pebbles and fuel particles in Monte Carlo reactor physics calculations is an important development step for pebble bed reactor modelling. It allows exact modelling of pebble bed reactors with realistic pebble beds, without placing the pebbles in regular lattices. In this study the multiplication coefficient of the HTR-10 pebble bed reactor is calculated with the Serpent reactor physics code and, using this multiplication coefficient, the number of pebbles required for the critical load of the reactor is determined. The multiplication coefficient is calculated using pebble beds produced with the discrete element method and three different material libraries in order to compare the results. The obtained results are lower than those measured at the experimental reactor and somewhat lower than those obtained with other codes in earlier studies.
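As an illustration of how a critical load can be read off such results (hypothetical numbers only, not the study's data), the calculated multiplication coefficient can be interpolated as a function of the pebble count and solved for k_eff = 1:

```python
# Illustrative sketch only: estimate the critical loading by interpolating
# calculated multiplication coefficients k_eff versus the number of pebbles N.
# Both arrays below are made-up example values, not Serpent results.
import numpy as np

N = np.array([9000, 12000, 15000, 18000])    # pebble counts (hypothetical)
keff = np.array([0.93, 0.98, 1.02, 1.05])    # corresponding k_eff (hypothetical)

# linear interpolation of N as a function of k_eff around criticality
N_critical = np.interp(1.0, keff, N)
print(f"estimated critical load: about {N_critical:.0f} pebbles")
```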
Abstract:
Nineteen-channel EEGs were recorded from the scalp surface of 30 healthy subjects (16 males and 14 females, mean age: 34 years, SD: 11.7 years) at rest and under trains of intermittent photic stimulation (IPS) at rates of 5, 10 and 20 Hz. Digitized data were submitted to spectral analysis with the fast Fourier transform, providing the basis for the computation of global field power (GFP). For quantification, GFP values in the frequency ranges of 5, 10 and 20 Hz at rest were divided by the corresponding data obtained under IPS. All subjects showed a photic driving effect at each rate of stimulation. GFP data were normally distributed, whereas ratios from photic driving effect data showed no uniform behavior due to high interindividual variability. Suppression of alpha-power after IPS with 10 Hz was observed in about 70% of the volunteers. In contrast, ratios of alpha-power were unequivocal in all subjects: IPS at 20 Hz always led to a suppression of alpha-power. Dividing alpha-GFP under 20-Hz IPS by alpha-GFP at rest (R = alpha-GFP_IPS / alpha-GFP_rest) thus resulted in ratios lower than 1. We conclude that ratios from GFP data with 20-Hz IPS may provide a suitable paradigm for further investigations.
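As a sketch of the quantification step (assumptions only: GFP is taken here as the spatial root-mean-square deviation of spectral amplitudes across channels; the array names and sampling rate below are hypothetical, not the study's code), the ratio R = alpha-GFP_IPS / alpha-GFP_rest could be computed as follows:

```python
# Minimal sketch of a GFP-based ratio from multichannel EEG spectra.
import numpy as np

def gfp_at_frequency(eeg, fs, target_hz):
    """eeg: (n_channels, n_samples) array; fs: sampling rate in Hz.
    Returns the spatial RMS deviation across channels of the FFT amplitude
    at the frequency bin closest to target_hz."""
    spectra = np.abs(np.fft.rfft(eeg, axis=1))             # per-channel spectra
    freqs = np.fft.rfftfreq(eeg.shape[1], d=1.0 / fs)
    k = np.argmin(np.abs(freqs - target_hz))               # nearest bin
    amps = spectra[:, k]
    return np.sqrt(np.mean((amps - amps.mean()) ** 2))     # spatial RMS deviation

# Hypothetical arrays `rest` and `ips_20hz`, each shaped (19, n_samples), fs = 256:
# R = gfp_at_frequency(ips_20hz, 256, 10.0) / gfp_at_frequency(rest, 256, 10.0)
# R < 1 would indicate alpha suppression under 20-Hz IPS, as reported above.
```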
Abstract:
The map belongs to the A. E. Nordenskiöld collection.
Abstract:
The map belongs to the A. E. Nordenskiöld collection.
Abstract:
The pipeline for macro- and microarray analyses (PMmA) is a set of scripts with a web interface developed to analyze DNA array data generated by array image quantification software. PMmA is designed for use with single- or double-color array data and works as a pipeline with five classes (data format, normalization, data analysis, clustering, and array maps). It can also be used as a plugin in the BioArray Software Environment, an open-source database for array analysis, or in a local version of the web service. All scripts in PMmA were developed in the Perl programming language, and the statistical analysis functions were implemented in the R statistical language. Consequently, our package is platform-independent software. Our algorithms can correctly select almost 90% of the differentially expressed genes, showing a superior performance compared to other methods of analysis. The pipeline software has been applied to public macroarray data of 1536 expressed sequence tags from sugarcane exposed to cold for 3 to 48 h. PMmA identified thirty cold-responsive genes previously unidentified in this public dataset. Fourteen genes were up-regulated, two had variable expression and the other fourteen were down-regulated in the treatments. These new findings were certainly a consequence of using a superior statistical analysis approach, since the original study did not take into account the dependence of data variability on the average signal intensity of each gene. The web interface, supplementary information, and the package source code are available, free, to non-commercial users at http://ipe.cbmeg.unicamp.br/pub/PMmA.
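The intensity-dependent treatment of variability credited above can be illustrated with a minimal sketch (not part of PMmA, which is implemented in Perl and R; the function name and parameters below are hypothetical): genes are binned by average signal intensity and the log-ratios are standardised within each bin before thresholding.

```python
# Illustrative sketch of intensity-dependent selection of differentially
# expressed genes. Assumes strictly positive per-gene intensities.
import numpy as np

def intensity_dependent_z(treated, control, n_bins=20):
    """treated/control: 1-D arrays of per-gene signal intensities."""
    m = np.log2(treated / control)               # per-gene log-ratio
    a = 0.5 * np.log2(treated * control)         # average log-intensity
    z = np.zeros_like(m)
    edges = np.quantile(a, np.linspace(0, 1, n_bins + 1))
    idx = np.clip(np.digitize(a, edges) - 1, 0, n_bins - 1)
    for b in range(n_bins):                      # standardise within each bin
        sel = idx == b
        if not sel.any():
            continue
        z[sel] = (m[sel] - m[sel].mean()) / m[sel].std(ddof=1)
    return z

# Genes with |z| above a chosen threshold (e.g. 2) would be flagged as
# differentially expressed; this is only a sketch of the general idea.
```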
Abstract:
The original contributions of this thesis to knowledge are novel digital readout architectures for hybrid pixel readout chips. The thesis presents an asynchronous bus-based architecture, a data-node-based column architecture and a network-based pixel matrix architecture for data transport. It is shown that the data-node architecture achieves a readout efficiency of 99% with half the output rate of a bus-based system. The network-based solution avoids “broken” columns caused by certain manufacturing errors, and it distributes internal data traffic more evenly across the pixel matrix than column-based architectures; an efficiency improvement of > 10% is achieved with both uniform and non-uniform hit occupancies. The architectural design has been done using transaction level modeling (TLM) and sequential high-level design techniques to reduce design and simulation time. Using these high-level techniques it has been possible to simulate tens of column and full chip architectures, with a run-time decrease of more than a factor of 10 compared to the register transfer level (RTL) design technique and a 50% reduction in lines of code (LoC) for the high-level models compared to the RTL description. Two of the architectures are then demonstrated in two hybrid pixel readout chips. The first chip, Timepix3, has been designed for the Medipix3 collaboration. According to the measurements, it consumes < 1 W/cm^2 and delivers up to 40 Mhits/s/cm^2 with 10-bit time-over-threshold (ToT) and 18-bit time-of-arrival (ToA) at 1.5625 ns resolution. The chip uses a token-arbitrated, asynchronous two-phase handshake column bus for internal data transfer and has been successfully used in a multi-chip particle tracking telescope. The second chip, VeloPix, is a readout chip being designed for the upgrade of the Vertex Locator (VELO) of the LHCb experiment at CERN. Based on the simulations, it consumes < 1.5 W/cm^2 while delivering up to 320 Mpackets/s/cm^2, each packet containing up to 8 pixels. VeloPix uses a node-based data fabric to achieve a throughput of 13.3 Mpackets/s from the column to the end-of-column (EoC) logic. By combining Monte Carlo physics data with high-level simulations, it has been demonstrated that the architecture meets the requirements of the VELO (260 Mpackets/s/cm^2 with an efficiency of 99%).
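The kind of readout-efficiency figure quoted above can be illustrated with a toy model (purely illustrative; neither the parameters nor the drain scheme correspond to the thesis architectures): a single pixel column with a finite FIFO that drains one packet per clock cycle under Poisson-distributed hit arrivals.

```python
# Toy estimate of column readout efficiency: hits that arrive while the FIFO
# is full are counted as lost. All parameter values are made-up examples.
import numpy as np

def column_efficiency(mean_hits_per_cycle, fifo_depth, n_cycles=200_000, seed=1):
    rng = np.random.default_rng(seed)
    arrivals = rng.poisson(mean_hits_per_cycle, n_cycles)
    fifo, accepted = 0, 0
    for h in arrivals:
        taken = min(h, fifo_depth - fifo)   # hits that do not fit are lost
        accepted += taken
        fifo += taken
        fifo = max(fifo - 1, 0)             # one packet read out per cycle
    return accepted / arrivals.sum()

# e.g. column_efficiency(0.5, fifo_depth=8) stays close to 1.0 as long as the
# average hit rate remains below the one-packet-per-cycle drain rate.
```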