915 results for Data replication processes
Abstract:
In conventional Finite Element Analysis (FEA) of radial-axial ring rolling (RAR), the motions of all tools are usually defined prior to simulation in the preprocessing step. However, the real process holds up to 8 degrees of freedom (DOF) that are controlled by industrial control systems according to actual sensor values and preselected control strategies. Since the histories of the motions are unknown before the experiment and depend on sensor data, conventional FEA cannot represent the process in advance of the experiment. In order to enable the use of FEA in the process design stage, this approach integrates the industrially applied control algorithms of the real process, including all relevant sensors and actuators, into the FE model of ring rolling. Additionally, the process design of a novel process, 'axial profiling', in which a profiled roll is used for rolling axially profiled rings, is supported by FEA. Using this approach, suitable control strategies can be tested in a virtual environment before the real process is run. © 2013 AIP Publishing LLC.
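The closed-loop idea described above (tool motions computed during the simulation from sensed values rather than prescribed beforehand) can be sketched in miniature. Everything below, the toy process model, the gains and the setpoint, is an illustrative assumption, not the paper's FE model or the industrial control algorithms.

```python
# Hedged sketch: a proportional control law adjusts a tool feed rate from a
# "sensed" ring diameter at each simulation increment, instead of following a
# motion history fixed in preprocessing. All names and gains are illustrative.

def simulate_ring_growth(diameter, feed_rate, dt=0.1):
    """Toy stand-in for one FE increment: diameter grows with mandrel feed."""
    return diameter + 0.5 * feed_rate * dt

def control_law(sensed_diameter, target, kp=0.8):
    """Proportional controller: feed command computed from the diameter error."""
    return kp * (target - sensed_diameter)

target, diameter = 300.0, 250.0  # mm; setpoint chosen by the control strategy
for _ in range(200):
    feed = control_law(diameter, target)        # controller reads the sensor value
    diameter = simulate_ring_growth(diameter, feed)  # simulation step applies the motion

print(round(diameter, 1))  # converges toward the 300 mm setpoint
```

The same loop structure lets different control strategies be swapped in and tested virtually before any experiment, which is the point of embedding the controller in the model.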
Abstract:
We present Random Partition Kernels, a new class of kernels derived by demonstrating a natural connection between random partitions of objects and kernels between those objects. We show how the construction can be used to create kernels from methods that would not normally be viewed as random partitions, such as Random Forest. To demonstrate the potential of this method, we propose two new kernels, the Random Forest Kernel and the Fast Cluster Kernel, and show that these kernels consistently outperform standard kernels on problems involving real-world datasets. Finally, we show how the form of these kernels lends itself to a natural approximation that is appropriate for certain big data problems, allowing $O(N)$ inference in methods such as Gaussian Processes, Support Vector Machines and Kernel PCA.
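The core construction (a kernel defined as the fraction of random partitions in which two objects fall in the same block) can be sketched directly. The partitioner below is a deliberately simple random axis-aligned split, standing in for richer partitioners such as a forest's leaf assignments; the parameters and data are illustrative, not the paper's.

```python
import numpy as np

# Hedged sketch of a random-partition kernel: K(x, y) is the fraction of
# random partitions in which x and y share a block. Each per-partition
# co-occurrence matrix is positive semidefinite, so their average is too.

def random_partition_kernel(X, n_partitions=200, rng=None):
    rng = np.random.default_rng(rng)
    n = X.shape[0]
    K = np.zeros((n, n))
    for _ in range(n_partitions):
        feat = rng.integers(X.shape[1])              # pick a random feature
        thresh = rng.uniform(X[:, feat].min(), X[:, feat].max())
        labels = X[:, feat] > thresh                 # two-block partition
        K += labels[:, None] == labels[None, :]      # same-block indicator
    return K / n_partitions

X = np.array([[0.0, 0.0], [0.1, 0.1], [5.0, 5.0]])
K = random_partition_kernel(X, rng=0)
# nearby points co-occur in almost every partition; distant ones rarely do
```

The resulting matrix can be passed to any kernel method; the paper's Random Forest Kernel follows the same recipe with tree leaves defining the blocks.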
Abstract:
We report measurements of relative cross sections for single capture (SC), double capture (DC), single ionization (SI), double ionization (DI), and transfer ionization (TI) in collisions of Xe23+ ions with helium atoms in the velocity range 0.65-1.32 a.u. The relative cross sections show a weak velocity dependence. The cross-section ratio of double- (DE) to single-electron (SE) removal from He, sigma(DE)/sigma(SE), is about 0.45. Single capture is the dominant reaction channel, followed by transfer ionization, while only very small probabilities are found for pure ionization and double capture. The present experimental data are in satisfactory agreement with estimations by the extended classical over-barrier (ECB) model.
Abstract:
We have studied the excitation and dissociation processes of the molecule W(CO)6 in collisions with low kinetic energy (3 keV) protons, monocharged fluorine, and chlorine ions using double charge transfer spectroscopy. By analyzing the kinetic energy loss of the projectile anions, we measured the excitation energy distribution of the produced transient dications W(CO)6(2+). By coincidence measurements between the anions and the stable W(CO)6(2+) ions or their fragments, we determined the energy distribution for each dissociation channel. Based on the experimental data, the emission of the first CO was tentatively attributed to a nonstatistical direct dissociation process, and the emission of the second and subsequent CO ligands was attributed to statistical dissociation processes. The dissociation energies for the successive breaking of the W-CO bonds were estimated using a cascade model. The ratio between the charge separation and evaporation channels (by the loss of CO+ and CO, respectively) was estimated to be 6% in the case of Cl+ impact. (C) 2011 American Institute of Physics. [doi: 10.1063/1.3523347]
Abstract:
Using meteorological data and an RS dynamic land-use observation data set, the potential land productivity limited by solar radiation and temperature is estimated, and the impacts of recent LUCC processes on it are analyzed in this paper. The results show that the influence of LUCC processes on potential land productivity change is extensive and unbalanced. LUCC generally reduces the productivity in South China and increases it in North China, and the overall effect is an increase of total productivity by 26.22 million tons. Farmland reclamation and the loss of original farmland are the primary causes of potential land productivity change. The reclamation, mostly distributed in the arable-pasture and arable-forest transitional zones and the oases of northwestern China, has increased total productivity by 83.35 million tons, accounting for 3.50% of the overall output. The losses of original farmland, driven by built-up areas invading and occupying arable land, are mostly distributed in regions with rapid economic development, e.g. the Huang-Huai-Hai plain, Yangtze River delta, Zhujiang delta, central part of Gansu, southeast coastal region, southeast of Sichuan Basin and Urumqi-Shihezi. They have decreased total productivity by 57.13 million tons, which is 2.40% of the overall output.
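The figures quoted above are internally consistent, which a few lines of arithmetic confirm: the gain minus the loss reproduces the stated net increase, and both percentages imply the same overall output.

```python
# Consistency check of the reported figures: net change, and the overall
# output implied by the two stated percentages.

gain, loss = 83.35, 57.13          # million tons
net = gain - loss
print(round(net, 2))               # 26.22, matching the stated net increase

output_from_gain = gain / 0.035    # gain stated as 3.50% of overall output
output_from_loss = loss / 0.024    # loss stated as 2.40% of overall output
# both imply an overall output of roughly 2381 million tons
```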
Abstract:
A new approach is proposed to simulate splash erosion on local soil surfaces. Without the effect of wind and other raindrops, the impact of each free-falling raindrop was considered an independent event from the stochastic viewpoint. The erosivity of a single raindrop, which depends on its kinetic energy, was computed by an empirical relationship in which the kinetic energy was expressed as a power function of the equivalent diameter of the raindrop. An empirical linear function combining the kinetic energy and soil shear strength was used to estimate the amount of soil particles detached by a single raindrop. Considering an ideal local soil surface of 1 m x 1 m, the expected number of received free-falling raindrops with different diameters per unit time was described by the combination of the raindrop size distribution function and the terminal velocity of raindrops. The total splash amount was taken as the sum of the amounts detached by all raindrops in the rainfall event. The total splash amount per unit time was subdivided into three components: net splash amount, single impact amount and re-detachment amount. The re-detachment amount was obtained from a spatial geometric probability derived using the Poisson function, in which overlapped impact areas were considered. The net splash amount was defined as the mass of soil particles collected outside the splash dish. It was estimated by another spatial geometric probability in which the average splash distance, related to the median grain size of the soil, and the effects of other detached soil particles and other free-falling raindrops were considered. Splash experiments under artificial rainfall were carried out to validate the applicability and accuracy of the model. Our simulated results suggested that the net splash amount and re-detachment amount were small parts of the total splash amount, with proportions of 0.15% and 2.6%, respectively. The comparison of simulated data with measured data showed that the model can successfully simulate the soil-splash process, requiring only the rainfall intensity and original soil properties, including initial bulk density, water content and median grain size, plus some empirical constants related to the soil surface shear strength, the raindrop size distribution function and the average splash distance. Copyright (c) 2007 John Wiley & Sons, Ltd.
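The accounting described above can be sketched as follows. All coefficients and the drop size distribution below are illustrative assumptions, not the paper's calibrated empirical constants; the point is the structure: per-drop kinetic energy as a power of diameter, a linear detachment law against shear strength, expected drop counts per size class, and a Poisson overlap term for re-detachment.

```python
import math

def kinetic_energy(d_mm, a=2.95e-6, b=4.36):
    """KE of one drop as a power function of equivalent diameter (illustrative)."""
    return a * d_mm ** b

def detached_mass(ke, shear_strength_kpa, alpha=0.5, beta=1e-7):
    """Empirical linear law: mass splashed by a single drop; small drops
    below the shear-strength threshold detach nothing."""
    return max(0.0, alpha * ke - beta * shear_strength_kpa)

def drop_count(d_mm, intensity=0.5, lam=2.0):
    """Expected drops of diameter d per unit area and time, from an
    exponential size distribution (a common illustrative assumption)."""
    return intensity * math.exp(-lam * d_mm)

diameters = [0.5 * k for k in range(1, 11)]          # 0.5 .. 5.0 mm classes
total = sum(drop_count(d) * detached_mass(kinetic_energy(d), 30.0)
            for d in diameters)                      # total splash per unit time

# Poisson overlap: probability that a drop strikes already-impacted area,
# contributing to re-detachment rather than first-time detachment.
impact_rate, crater_area = 120.0, 1e-4               # drops per unit time, area per impact
p_overlap = 1 - math.exp(-impact_rate * crater_area)
```

With these toy numbers the overlap probability is small, in line with the paper's finding that re-detachment is a minor fraction of the total splash amount.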
Abstract:
The Zenisu deep-sea channel originates on the Izu-Ogasawara island arc, and disappears in the Shikoku Basin of the Philippine Sea. The geomorphology, sedimentary processes, and the development of the Zenisu deep-sea channel were investigated on the basis of swath bathymetry, side-scan sonar imagery, submersible observations, and seismic data. The deep-sea channel can be divided into three segments according to the downslope gradient and channel orientation. They are the Zenisu Canyon, the E-W fan channel, and the trough-axis channel. The sediment fill is characterized by turbidite and debrite deposition and blocky-hummocky avalanche deposits on the flanks of the Zenisu Ridge. In the Zenisu Canyon and the Zenisu deep-sea channel, sediment transport by turbidity currents generates sediment waves (dunes) observed during the Shinkai 6500 dive 371. The development of the Zenisu Canyon is controlled by a N-S shear fault, whereas the trough-axis channel is controlled by basin subsidence associated with the Zenisu Ridge. The E-W fan channel was probably affected by the E-W fault and the basement morphology.
Abstract:
The Zenisu deep-sea channel originates in a volcanic arc region, the Izu-Ogasawara Island Arc, and vanishes in the Shikoku Basin of the Philippine Sea. According to the swath bathymetry, the deep-sea channel can be divided into three segments: the Zenisu canyon, the E-W fan channel and the trough-axis channel. A large amount of volcanic detritus was deposited in the Zenisu Trough via the deep-sea channel because of its volcanic arc origin. On the basis of the swath bathymetry, submersible and seismic reflection data, the deposits are characterized by turbidite and debrite deposits, as in other major deep-sea channels. Erosion, or only sparse sediment, was observed in the Zenisu canyon, whereas abundant turbidites and debrites occur in the E-W fan channel and trough-axis channel. Cold seep communities, an active fault and fluid flow were discovered along the lower slope of the Zenisu Ridge. Vertical sedimentary sequences in the Zenisu Trough consist of the four post-rift sequence units of the Shikoku Basin, among which Units A and B are two turbidite units. The development of the Zenisu canyon is controlled by the N-S shear fault, the E-W fan channel is related to the E-W shear fault, and the trough-axis channel is related to the subsidence of the central basin.
Abstract:
Intraseasonal variability of Indian Ocean sea surface temperature (SST) during boreal winter is investigated by analyzing available data and a suite of solutions to an ocean general circulation model for 1998-2004. This period covers the QuikSCAT and Tropical Rainfall Measuring Mission (TRMM) observations. Impacts of the 30-90 day and 10-30 day atmospheric intraseasonal oscillations (ISOs) are examined separately, with the former dominated by the Madden-Julian Oscillation (MJO) and the latter dominated by convectively coupled Rossby and Kelvin waves. The maximum variation of intraseasonal SST occurs at 10°S-2°S in the wintertime Intertropical Convergence Zone (ITCZ), where the mixed layer is thin and intraseasonal wind speed reaches its maximum. The observed maximum warming (cooling) averaged over (60°E-85°E, 10°S-3°S) is 1.13°C (-0.97°C) for the period of interest, with a standard deviation of 0.39°C in winter. This SST change is forced predominantly by the MJO. While the MJO causes a basin-wide cooling (warming) in the ITCZ region, submonthly ISOs cause a more complex SST structure that propagates southwestward in the western-central basin and southeastward in the eastern ocean. On both the MJO and submonthly timescales, winds are the deterministic factor for the SST variability. Short-wave radiation generally plays a secondary role, and effects of precipitation are negligible. The dominant role of winds results roughly equally from wind speed and stress forcing. Wind speed affects SST by altering turbulent heat fluxes and entrainment cooling. Wind stress affects SST via several local and remote oceanic processes.
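Examining the 30-90 day and 10-30 day oscillations separately amounts to band-pass filtering the daily time series. A minimal FFT-based sketch (not the study's actual filtering procedure, and using synthetic data) is:

```python
import numpy as np

def bandpass(x, pmin, pmax, dt=1.0):
    """Keep only Fourier components with periods in [pmin, pmax] days."""
    freqs = np.fft.rfftfreq(len(x), d=dt)            # cycles per day
    spec = np.fft.rfft(x)
    periods = 1 / np.maximum(freqs, 1e-12)           # avoid divide-by-zero at DC
    keep = (freqs > 0) & (periods >= pmin) & (periods <= pmax)
    return np.fft.irfft(spec * keep, n=len(x))

t = np.arange(720)                                   # two years of daily samples
sst = np.sin(2 * np.pi * t / 60) + 0.5 * np.sin(2 * np.pi * t / 15)
mjo_band = bandpass(sst, 30, 90)                     # recovers the 60-day signal
submonthly = bandpass(sst, 10, 30)                   # recovers the 15-day signal
```

Applied to observed SST, the two filtered series separate the MJO-dominated and submonthly-wave-dominated contributions so their impacts can be compared.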
Abstract:
Byers, D., Peel, D., Thomas, D. (2007). Habit, aggregation and long memory: Evidence from television audience data. Applied Economics, 39(3), 321-327.
Abstract:
A new approach is proposed for clustering time-series data. The approach can be used to discover groupings of similar object motions that were observed in a video collection. A finite mixture of hidden Markov models (HMMs) is fitted to the motion data using the expectation-maximization (EM) framework. Previous approaches for HMM-based clustering employ a k-means formulation, where each sequence is assigned to only a single HMM. In contrast, the formulation presented in this paper allows each sequence to belong to more than a single HMM with some probability, and the hard decision about the sequence class membership can be deferred until a later time when such a decision is required. Experiments with simulated data demonstrate the benefit of using this EM-based approach when there is more "overlap" in the processes generating the data. Experiments with real data show the promising potential of HMM-based motion clustering in a number of applications.
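The soft-assignment E-step described above can be sketched in a few lines. Rather than fitting full HMMs, the `loglik` array below stands in for log p(sequence | HMM k); the values and mixture weights are illustrative. The point is that each sequence receives a probability over all component models instead of a single k-means-style label.

```python
import numpy as np

def responsibilities(loglik, log_weights):
    """Posterior membership p(k | sequence), one row per sequence,
    computed stably in log space."""
    log_post = loglik + log_weights                   # broadcast over sequences
    log_post -= log_post.max(axis=1, keepdims=True)   # numerical stability
    post = np.exp(log_post)
    return post / post.sum(axis=1, keepdims=True)

# 2 sequences, 3 candidate HMMs (log-likelihoods are illustrative numbers)
loglik = np.array([[-10.0, -12.0, -30.0],
                   [-25.0, -11.0, -11.5]])
log_weights = np.log(np.array([1 / 3, 1 / 3, 1 / 3]))
R = responsibilities(loglik, log_weights)
# row 0 favours HMM 0 but keeps some mass on HMM 1; the hard decision
# about class membership can be deferred, as in the paper
```

In the full EM algorithm these responsibilities weight the re-estimation of each HMM's parameters and the mixture weights in the M-step.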
Abstract:
This thesis describes the optimisation of chemoenzymatic methods in asymmetric synthesis. Modern synthetic organic chemistry has experienced an enormous growth in biocatalytic methodologies; enzymatic transformations and whole cell bioconversions have become generally accepted synthetic tools for asymmetric synthesis. Biocatalysts are exceptional catalysts, combining broad substrate scope with high regio-, enantio- and chemoselectivities, enabling the resolution of organic substrates with superb efficiency and selectivity. In this study three biocatalytic applications in enantioselective synthesis were explored, and perhaps the most significant outcome of this work is the excellent enantioselectivity achieved through optimisation of reaction conditions, improving the synthetic utility of the biotransformations. In the first chapter a summary of literature discussing the stereochemical control of baker's yeast (Saccharomyces cerevisiae) mediated reduction of ketones by the introduction of sulfur moieties is presented, setting the work of Chapter 2 in context. The focus of the second chapter was the synthesis and biocatalytic resolution of (±)-trans-2-benzenesulfonyl-3-n-butylcyclopentanone. For the first time the practical limitations of this resolution have been addressed, providing synthetically useful quantities of enantiopure synthons for application in the total synthesis of both enantiomers of 4-methyloctanoic acid, the aggregation pheromone of the rhinoceros beetles of the genus Oryctes. The unique aspect of this enantioselective synthesis was the overall regio- and enantioselective introduction of the methyl group to the octanoic acid chain. This work is part of an ongoing research programme in our group focussed on baker's yeast mediated kinetic resolution of 2-keto sulfones. The third chapter describes hydrolase-catalysed kinetic resolutions leading to a series of 3-aryl alkanoic acids.
Hydrolysis of the ethyl esters with a series of hydrolases was undertaken to identify biocatalysts that yield the corresponding acids in highly enantioenriched form. Contrary to literature reports, in which a complete loss of efficiency, and accordingly of enantioselection, was described for the kinetic resolution of sterically demanding 3-arylalkanoic acids, the highest reported enantiopurities of these acids were achieved (up to >98% ee) in this study through optimisation of reaction conditions. Steric and electronic effects on the efficiency and enantioselectivity of the biocatalytic transformation were also explored. Furthermore, a novel approach to determine the absolute stereochemistry of the enantiopure 3-aryl alkanoic acids was investigated through a combination of co-crystallisation and X-ray diffraction linked with chiral HPLC analysis. The fourth chapter was focused on the development of a biocatalytic protocol for the asymmetric Henry reaction. Efficient kinetic resolution in hydrolase-mediated transesterification of cis- and trans- β-nitrocyclohexanol derivatives was achieved. A base-catalysed intramolecular Henry reaction coupled with the hydrolase-mediated kinetic resolution, with a view to the selective acetylation of a single stereoisomer, was investigated. While dynamic kinetic resolution in the intramolecular Henry was not achieved, significant progress was made on each of the individual elements and, significantly, the feasibility of this process was demonstrated. The final chapter contains the full experimental details, including spectroscopic and analytical data of all compounds synthesised in this project, while details of chiral HPLC analysis are included in the appendix. The data for the crystal structures are contained in the attached CD.
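The enantiopurity and selectivity figures above follow standard kinetic-resolution arithmetic. As a worked example, enantiomeric excess (ee) and the enantiomeric ratio E from conversion and substrate ee (the well-known Chen et al. relation) can be computed as below; the numbers are illustrative, not from the thesis.

```python
import math

def ee(major, minor):
    """Enantiomeric excess from the amounts of the two enantiomers."""
    return (major - minor) / (major + minor)

def selectivity_E(c, ee_s):
    """Enantiomeric ratio E from fractional conversion c and substrate ee
    (Chen et al.'s relation for an irreversible kinetic resolution)."""
    return math.log((1 - c) * (1 - ee_s)) / math.log((1 - c) * (1 + ee_s))

print(round(ee(99.0, 1.0), 2))   # 0.98, i.e. a 98% ee sample
E = selectivity_E(0.5, 0.98)     # large E indicates a highly selective resolution
```

An ee above 0.98 at 50% conversion corresponds to an E of several hundred, which is the regime the optimised resolutions above operate in.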
Abstract:
The enculturation of Irish traditional musicians involves informal, non-formal, and sometimes formal learning processes in a number of different settings, including traditional music sessions, workshops, festivals, and classes. Irish traditional musicians also learn directly from family, peers, and mentors and by using various forms of technology. Each experience contributes to the enculturation process in meaningful and complementary ways. The ethnographic research discussed in this dissertation suggests that within Irish traditional music culture, enculturation occurs most effectively when learners experience a multitude of learning practices. A variety of experiences ensures that novices receive multiple opportunities for engagement and learning. If a learner finds one learning practice ineffective, there are other avenues of enculturation. This thesis explores the musical enculturation of Irish traditional musicians. It focuses on the process of becoming a musician by drawing on methodologies and theories from ethnomusicology, education, and Irish traditional music studies. Data was gathered through multiple ethnographic methodologies. Fieldwork based on participant-observation was carried out in a variety of learning contexts, including traditional music sessions, festivals, workshops, and weekly classes. Additionally, interviews with twenty accomplished Irish traditional musicians provide diverse narratives and firsthand insight into musical development and enculturation. These and other methodologies are discussed in Chapter 1. The three main chapters of the thesis explore various common learning experiences. Chapter 2 explores how Irish traditional musicians learn during social and musical interactions between peers, mentors, and family members, and focuses on live music-making in private homes, sessions, and concerts. These informal and non-formal learning experiences primarily take place outside of organizations and institutions.
The interview data suggests these learning experiences are perhaps the most pervasive and influential in terms of musical enculturation. Chapter 3 discusses learning experiences in more organized settings, such as traditional music classes, workshops, summer schools, and festivals. The role of organizations such as Comhaltas Ceoltóirí Éireann and pipers' clubs is discussed from the point of view of the learner. Many of the learning experiences explored in this chapter are informal, non-formal, and sometimes formal in nature, depending on the philosophy of the organization, institution, and individual teacher. The interview data and field observations indicate that learning in these contexts is common and plays a significant role in enculturation, particularly for traditional musicians who were born during and after the 1970s. Chapter 4 explores the ways Irish traditional musicians use technology, including written sources, phonography, videography, websites, and emerging technologies, during the enculturation process. Each type of technology presents different educational implications, and traditional musicians use these technologies in diverse ways, some more than others. For this, and other reasons, technology plays a complex role during the process of musical enculturation. Drawing on themes which emerge in Chapters 2, 3, and 4, the final chapter of this dissertation explores overarching patterns of enculturation within Irish traditional music culture. This ethnographic work suggests that longevity of participation and engagement in multiple learning and performance opportunities foster the enculturation of Irish traditional musicians. Through numerous and prolonged participations in music-making, novices become accustomed to and learn musical, social, and cultural behaviours. The final chapter also explores interconnections between learning experiences and proposes directions for future research.
Abstract:
It is estimated that the quantity of digital data being transferred, processed or stored at any one time currently stands at 4.4 zettabytes (4.4 × 2^70 bytes), and this figure is expected to have grown by a factor of 10, to 44 zettabytes, by 2020. Exploiting this data is, and will remain, a significant challenge. At present there is the capacity to store 33% of the digital data in existence at any one time; by 2020 this capacity is expected to fall to 15%. These statistics suggest that, in the era of Big Data, the identification of important, exploitable data will need to be done in a timely manner. Systems for the monitoring and analysis of data, e.g. stock markets, smart grids and sensor networks, can be made up of massive numbers of individual components. These components can be geographically distributed yet may interact with one another via continuous data streams, which in turn may affect the state of the sender or receiver. This introduces a dynamic causality, which further complicates the overall system by introducing a temporal constraint that is difficult to accommodate. Practical approaches to realising such systems have led to a multiplicity of analysis techniques, each of which concentrates on specific characteristics of the system being analysed and treats those characteristics as the dominant component affecting the results being sought. This multiplicity of analysis techniques introduces another layer of heterogeneity, namely heterogeneity of approach, partitioning the field to the extent that results from one domain are difficult to exploit in another. This raises the question: can a generic solution for the monitoring and analysis of data be identified that accommodates temporal constraints, bridges the gap between expert knowledge and raw data, and enables data to be effectively interpreted and exploited in a transparent manner?
The approach proposed in this dissertation acquires, analyses and processes data in a manner that is free of the constraints of any particular analysis technique, while at the same time facilitating these techniques where appropriate. Constraints are applied by defining a workflow based on the production, interpretation and consumption of data. This supports the application of different analysis techniques on the same raw data without the danger of incorporating hidden bias that may exist. To illustrate and to realise this approach, a software platform has been created that allows for the transparent analysis of data, combining analysis techniques with a maintainable record of provenance so that independent third-party analysis can be applied to verify any derived conclusions. In order to demonstrate these concepts, a complex real-world example involving the near real-time capturing and analysis of neurophysiological data from a neonatal intensive care unit (NICU) was chosen. A system was engineered to gather raw data, analyse that data using different analysis techniques, uncover information, incorporate that information into the system and curate the evolution of the discovered knowledge. The application domain was chosen for three reasons: firstly, because it is complex and no comprehensive solution exists; secondly, it requires tight interaction with domain experts, thus requiring the handling of subjective knowledge and inference; and thirdly, given the dearth of neurophysiologists, there is a real-world need to provide a solution for this domain.
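The storage statistics quoted at the start of this abstract repay a quick check: although the storable fraction falls from 33% to 15%, the absolute storable capacity still grows, because the data volume grows tenfold over the same period.

```python
# Arithmetic on the figures quoted above (4.4 ZB growing tenfold; storable
# fraction falling from 33% to 15%).

data_now, growth = 4.4, 10          # zettabytes, growth factor to 2020
data_2020 = data_now * growth       # 44 ZB, as stated
capacity_now = 0.33 * data_now      # ~1.45 ZB storable today
capacity_2020 = 0.15 * data_2020    # ~6.6 ZB storable in 2020
```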
Abstract:
BACKGROUND: Biological processes occur on a vast range of time scales, and many of them occur concurrently. As a result, system-wide measurements of gene expression have the potential to capture many of these processes simultaneously. The challenge, however, is to separate these processes and time scales in the data. In many cases the number of processes and their time scales is unknown. This issue is particularly relevant to developmental biologists, who are interested in processes such as growth, segmentation and differentiation, which can all take place simultaneously, but on different time scales. RESULTS: We introduce a flexible and statistically rigorous method for detecting different time scales in time-series gene expression data, by identifying expression patterns that are temporally shifted between replicate datasets. We apply our approach to a Saccharomyces cerevisiae cell-cycle dataset and an Arabidopsis thaliana root developmental dataset. In both datasets our method successfully detects processes operating on several different time scales. Furthermore we show that many of these time scales can be associated with particular biological functions. CONCLUSIONS: The spatiotemporal modules identified by our method suggest the presence of multiple biological processes, acting at distinct time scales in both the Arabidopsis root and yeast. Using similar large-scale expression datasets, the identification of biological processes acting at multiple time scales in many organisms is now possible.
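The core operation, detecting a temporal shift between replicate time series, can be sketched as finding the lag that maximises the cross-correlation between them. This is a stand-in for the paper's statistically rigorous method, using illustrative synthetic data.

```python
import numpy as np

def best_lag(a, b, max_lag=10):
    """Lag (in samples) by which `b` trails `a`, found by maximising the
    correlation between the overlapping parts of the two series."""
    def corr(lag):
        if lag >= 0:
            x, y = a[:len(a) - lag], b[lag:]
        else:
            x, y = a[-lag:], b[:len(b) + lag]
        return np.corrcoef(x, y)[0, 1]
    return max(range(-max_lag, max_lag + 1), key=corr)

t = np.arange(100)
rep1 = np.sin(2 * np.pi * t / 24)           # an "expression" pattern in replicate 1
rep2 = np.sin(2 * np.pi * (t - 3) / 24)     # same pattern shifted by 3 samples
print(best_lag(rep1, rep2))                 # 3
```

Genes whose replicate profiles agree only after such a shift are candidates for processes running at a different rate in the two replicates, which is the signal the method above exploits.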