973 results for Time components
Abstract:
Plantaginis Semen is commonly used in traditional medicine to treat edema, hypertension, and diabetes. The commercially available Plantaginis Semen in China comes mainly from three species. To clarify its chemical composition and distinguish between the different species, we established a metabolite profiling method based on ultra-high-performance liquid chromatography with electrospray ionization quadrupole time-of-flight tandem mass spectrometry coupled with an elevated-energy technique. A total of 108 compounds, including phenylethanoid glycosides, flavonoids, guanidine derivatives, terpenoids, organic acids, and fatty acids, were identified from Plantago asiatica L., P. depressa Willd., and P. major L. The results showed significant differences in chemical components among the three species, particularly in flavonoids. This study is the first to provide a comprehensive chemical profile of Plantaginis Semen, which could inform the quality control, medication guidance, and new drug development of Plantago seeds.
Abstract:
Short, intense pulses of fast neutrons were produced in a two-stage laser-driven experiment. Protons were accelerated by means of the Target Normal Sheath Acceleration (TNSA) method using the TITAN facility at the Lawrence Livermore National Laboratory. Neutrons were obtained from interactions of the protons with a secondary lithium fluoride (LiF) target. The properties of the neutron flux were studied using BC-400 plastic scintillation detectors and the neutron time-of-flight (nTOF) technique. The detector setup and the experimental conditions were simulated with the Geant4 toolkit. The effects of different components of the experimental setup on the nTOF were studied. Preliminary results from a comparison between experimental and simulated nTOF distributions are presented.
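As background to the nTOF technique mentioned above, the sketch below shows the standard relativistic conversion from a neutron's time of flight to its kinetic energy; the flight path and timing values are illustrative only and are not taken from the TITAN experiment.

```python
# Minimal sketch: standard relativistic conversion from neutron time of
# flight to kinetic energy (not tied to the TITAN setup; the flight path
# and timing values below are illustrative only).
import numpy as np

M_N_MEV = 939.565        # neutron rest-mass energy, MeV
C = 299_792_458.0        # speed of light, m/s

def neutron_energy_mev(flight_path_m: float, tof_s: float) -> float:
    """Kinetic energy (MeV) of a neutron covering flight_path_m in tof_s."""
    beta = flight_path_m / (tof_s * C)
    if beta >= 1.0:
        raise ValueError("unphysical: v >= c")
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    return M_N_MEV * (gamma - 1.0)

# Example: a 2 m flight path and a 50 ns time of flight (hypothetical)
print(f"{neutron_energy_mev(2.0, 50e-9):.2f} MeV")
```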
Abstract:
R-matrix with time-dependence theory is applied to electron-impact ionisation processes for He in the S-wave model. Cross sections for electron-impact excitation, ionisation and ionisation with excitation for impact energies between 25 and 225 eV are in excellent agreement with benchmark cross sections. Ultra-fast dynamics induced by a scattering event are observed through time-dependent signatures associated with autoionisation from doubly excited states. Further insight into the dynamics can be obtained by examining the spin components of the time-dependent wavefunction.
Abstract:
An active vision system that performs tracking of moving objects in real time is described. The main goal is to obtain a system that integrates off-the-shelf components. These components include a stereoscopic robotic head as the active perception hardware; a DSP-based board, the SDB C80, as the massive data processor and image acquisition board; and finally a Pentium PC running Windows NT that interconnects and manages the whole system. Real-time operation is achieved by taking advantage of the special architecture of the DSP. An evaluation of the performance is included.
Abstract:
A structural time series model is one that is set up in terms of components that have a direct interpretation. In this paper, the discussion focuses on a dynamic modeling procedure based on the state space approach (associated with the Kalman filter) in the context of surface water quality monitoring, in order to analyze and evaluate the temporal evolution of the environmental variables and thus identify trends or possible changes in water quality (change point detection). The approach is applied to environmental time series: time series of surface water quality variables in a river basin. The statistical modeling procedure is applied to monthly values of physico-chemical variables measured at a network of 8 water monitoring sites over a 15-year period (1999-2014) in the River Ave hydrological basin, located in the Northwest region of Portugal.
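For readers unfamiliar with the approach, the sketch below implements the Kalman recursions for the simplest structural model, a local level (random walk plus noise); it illustrates the state space machinery only and is not the paper's exact model specification.

```python
# Minimal sketch of a structural (local level) state space model filtered
# with the Kalman recursions: y_t = mu_t + eps_t, mu_t = mu_{t-1} + eta_t.
# An illustration of the approach, not the paper's specification.
import numpy as np

def local_level_filter(y, var_eps=1.0, var_eta=0.1):
    """Return filtered level estimates for observations y."""
    mu, p = y[0], 1e4            # near-diffuse initial state and variance
    levels = []
    for obs in y:
        p = p + var_eta                  # prediction: random-walk level
        k = p / (p + var_eps)            # Kalman gain
        mu = mu + k * (obs - mu)         # update with new observation
        p = (1.0 - k) * p
        levels.append(mu)
    return np.array(levels)

# Example: a synthetic monthly water-quality-like series with a slow trend
rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(0, 0.1, 180)) + rng.normal(0, 1.0, 180)
print(local_level_filter(y)[-5:])
```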
Abstract:
Large component-based systems are often built from many of the same components. As individual component-based software systems are developed, tested and maintained, these shared components are repeatedly manipulated. As a result there are often significant overlaps and synergies across and among the different test efforts of different component-based systems. However, in practice, testers of different systems rarely collaborate, taking a test-all-by-yourself approach. As a result, redundant effort is spent testing common components, and important information that could be used to improve testing quality is lost. The goal of this research is to demonstrate that, if done properly, testers of shared software components can save effort by avoiding redundant work, and can improve the test effectiveness for each component as well as for each component-based software system by using information obtained when testing across multiple components. To achieve this goal I have developed collaborative testing techniques and tools for developers and testers of component-based systems with shared components, applied the techniques to subject systems, and evaluated the cost and effectiveness of applying the techniques. The dissertation research is organized in three parts. First, I investigated current testing practices for component-based software systems to find the testing overlap and synergy we conjectured exist. Second, I designed and implemented infrastructure and related tools to facilitate communication and data sharing between testers. Third, I designed two testing processes to implement different collaborative testing algorithms and applied them to large, actively developed software systems. This dissertation has shown the benefits of collaborative testing across component developers who share their components. With collaborative testing, researchers can design algorithms and tools to support collaboration processes, achieve better efficiency in testing configurations, and discover inter-component compatibility faults within a minimal time window after they are introduced.
Abstract:
Understanding how aquatic species grow is fundamental in fisheries, because stock assessment often relies on growth-dependent statistical models. Length-frequency-based methods become important when more applicable data for growth model estimation are either not available or very expensive. In this article, we develop a new framework for growth estimation from length-frequency data using a generalized von Bertalanffy growth model (VBGM) framework that allows time-dependent covariates to be incorporated. A finite mixture of normal distributions is used to model the length-frequency cohorts of each month, with the means constrained to follow a VBGM. The variances of the finite mixture components are constrained to be a function of mean length, reducing the number of parameters and allowing the variance to be estimated at any length. To optimize the likelihood, we use a minorization–maximization (MM) algorithm with a Nelder–Mead sub-step. This work was motivated by the decline in catches of the blue swimmer crab (BSC; Portunus armatus) off the east coast of Queensland, Australia. We test the method with a simulation study and then apply it to the BSC fishery data.
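The sketch below illustrates the kind of likelihood this framework describes: a finite normal mixture over lengths whose component means follow a VBGM and whose standard deviations are a function of mean length, optimized with a Nelder–Mead step. The ages, starting values and data are hypothetical, not the BSC fishery model.

```python
# Minimal sketch of a VBGM-constrained normal mixture likelihood; the ages,
# starting values and synthetic length data below are illustrative only.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

ages = np.array([0.5, 1.5, 2.5])                          # assumed cohort ages
lengths = np.random.default_rng(1).uniform(40, 200, 500)  # fake lengths (mm)

def neg_log_lik(theta):
    l_inf, k, t0, cv = theta
    mu = l_inf * (1.0 - np.exp(-k * (ages - t0)))   # VBGM component means
    if np.any(mu <= 0) or cv <= 0:
        return np.inf                                # reject unphysical fits
    sd = cv * mu                                     # sd as a function of mean
    dens = sum(norm.pdf(lengths, m, s) for m, s in zip(mu, sd)) / len(ages)
    return -np.sum(np.log(dens + 1e-300))

fit = minimize(neg_log_lik, x0=[220.0, 0.8, 0.0, 0.15], method="Nelder-Mead")
print(fit.x)
```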
Abstract:
Australian forest industries have a long history of export trade in a wide range of products, from woodchips (for paper manufacturing) and sandalwood (essential oils, carving and incense) to high-value musical instruments, flooring and outdoor furniture. For the high-value group, fluctuating environmental conditions brought on by changes in temperature and relative humidity can lead to performance problems due to consequent swelling, shrinkage and/or distortion of the wood elements. A survey determined the types of value-added products exported, including species and dimensions, packaging used and export markets. Data loggers were installed with shipments to monitor temperature and relative humidity conditions. These data were converted to timber equilibrium moisture content (EMC) values to provide an indication of the environment to which the wood elements would be acclimatising. The results of the initial survey indicated that the primary high-value wood export products included guitars, flooring, decking and outdoor furniture. The destination markets were mainly located in the northern hemisphere, particularly the United States of America, China, Hong Kong, Europe (including the United Kingdom), Japan, Korea and the Middle East. Other regions importing Australian-made wooden articles were south-east Asia, New Zealand and South Africa. Different timber species have differing rates of swelling and shrinkage, so the types of timber were also recorded during the survey. Results from this work determined that the major species were ash-type eucalypts from south-eastern Australia (commonly referred to in the market as Tasmanian oak), jarrah from Western Australia, and spotted gum, hoop pine, white cypress, blackbutt, brush box and Sydney blue gum from Queensland and New South Wales. The environmental conditions data indicated that microclimates in shipping containers can fluctuate extensively during shipping. Conditions at the time of manufacturing were usually between 10 and 12% equilibrium moisture content; however, conditions during shipping could range from 5% (very dry) to 20% (very humid). The packaging systems used were reported to be efficient at protecting the wooden articles from damage during transit. The research highlighted the potential risk for wood components to 'move' in response to periods of drier or more humid conditions than those at the time of manufacturing, and the importance of engineering a packaging system that can account for the environmental conditions experienced in shipping containers. Examples of potential dimensional changes in wooden components were calculated based on published unit shrinkage data for key species and the climatic data returned from the logging equipment. The information highlighted the importance of good design to account for possible timber movement during shipping. A timber movement calculator was developed to allow designers to input component species, dimensions, site of manufacture and destination, and to validate their product design.
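The sketch below shows the basic dimensional-change calculation such a timber movement calculator would perform, multiplying the component dimension by a unit shrinkage coefficient and the EMC change; the coefficient and EMC values are illustrative, not published figures for any species in the survey.

```python
# Minimal sketch of the dimensional-change calculation the survey describes:
# movement = dimension x unit shrinkage (% per 1% EMC change) x EMC change.
# The unit-shrinkage coefficient and EMC values below are illustrative only.
def timber_movement_mm(width_mm: float,
                       unit_shrinkage_pct: float,
                       emc_at_manufacture: float,
                       emc_in_transit: float) -> float:
    """Estimated width change (mm); negative means shrinkage."""
    delta_emc = emc_in_transit - emc_at_manufacture
    return width_mm * (unit_shrinkage_pct / 100.0) * delta_emc

# Example: a 90 mm board, 0.3% movement per 1% EMC change, manufactured at
# 11% EMC and shipped through a 5% EMC (very dry) container microclimate.
print(f"{timber_movement_mm(90.0, 0.3, 11.0, 5.0):+.2f} mm")
```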
Abstract:
Variable Data Printing (VDP) has brought new flexibility and dynamism to the printed page. Each printed instance of a specific class of document can now have different degrees of customized content within the document template. This flexibility comes at a cost. If every printed page is potentially different from all others, it must be rasterized separately, which is a time-consuming process. Technologies such as PPML (Personalized Print Markup Language) attempt to address this problem by dividing the bitmapped page into components that can be cached at the raster level, thereby speeding up the generation of page instances. A large number of documents are stored in Page Description Languages at a higher level of abstraction than the bitmapped page. Much of this content could be reused within a VDP environment, provided that separable document components can be identified and extracted. These components then need to be individually rasterisable so that each high-level component can be related to its low-level (bitmap) equivalent. Unfortunately, the unstructured nature of most Page Description Languages makes it difficult to extract content easily. This paper outlines the problems encountered in extracting component-based content from existing page description formats, such as PostScript, PDF and SVG, and how the differences between the formats affect the ease with which content can be extracted. The techniques are illustrated with reference to a tool called COG Extractor, which extracts content from PDF and SVG and prepares it for reuse.
Abstract:
The proliferation of new mobile communication devices, such as smartphones and tablets, has led to exponential growth in network traffic. The demand for supporting fast-growing consumer data rates urges wireless service providers and researchers to seek a new, efficient radio access technology beyond what current 4G LTE can provide: the so-called 5G technology. On the other hand, ubiquitous RFID tags, sensors, actuators, mobile phones and the like cut across many areas of modern-day living, offering the ability to measure, infer and understand environmental indicators. The proliferation of these devices has given rise to the term Internet of Things (IoT). For researchers and engineers in the field of wireless communication, exploring new effective techniques to support 5G communication and the IoT has become an urgent task, one that not only leads to fruitful research but also enhances the quality of our everyday life. Massive MIMO, which has shown great potential for improving the achievable rate with a very large number of antennas, has become a popular candidate. However, deploying a large number of antennas at the base station may not be feasible in indoor scenarios. Does a good alternative exist that can achieve system performance similar to massive MIMO in indoor environments? In this dissertation, we address this question by proposing the time-reversal (TR) technique as a counterpart of massive MIMO in indoor scenarios with a massive multipath effect. It is well known that radio signals experience many multipaths due to reflection from various scatterers, especially in indoor environments. The traditional TR waveform is able to create a focusing effect at the intended receiver with very low transmitter complexity in a severe multipath channel. TR's focusing effect is in essence a spatial-temporal resonance effect that brings all the multipaths to arrive at a particular location at a specific moment. We show that by using time-reversal signal processing, with a sufficiently large bandwidth, one can harvest the massive multipaths naturally existing in a rich-scattering environment to form a large number of virtual antennas and achieve the desired massive multipath effect with a single antenna. Further, we explore the optimal bandwidth for a TR system to achieve maximal spectral efficiency. By evaluating the spectral efficiency, the optimal bandwidth for a TR system is found to be determined by system parameters, e.g., the number of users and the backoff factor, rather than by the waveform type. Moreover, we investigate the tradeoff between complexity and performance by establishing a generalized relationship between system performance and waveform quantization in a practical communication system. It is shown that 4-bit quantized waveforms can achieve a bit-error rate similar to that of a TR system with perfect-precision waveforms. Besides 5G technology, the Internet of Things (IoT) has recently attracted more and more attention from both academia and industry. In the second part of this dissertation, the heterogeneity issue within the IoT is explored. One significant form of heterogeneity, given the massive number of devices in the IoT, is device heterogeneity, i.e., heterogeneous bandwidths and the associated radio-frequency (RF) components. Traditional middleware techniques result in fragmentation of the whole network, hampering object interoperability and slowing down the development of a unified reference model for the IoT. We propose a novel TR-based heterogeneous system, which can address the bandwidth heterogeneity while maintaining the benefit of TR. The increased complexity in the proposed system lies in the digital processing at the access point (AP), instead of at the devices' end, and can easily be handled with a more powerful digital signal processor (DSP). Meanwhile, the complexity of the terminal devices stays low and therefore satisfies the low-complexity and scalability requirements of the IoT. Since there is no middleware in the proposed scheme and the additional physical-layer complexity is concentrated on the AP side, the proposed heterogeneous TR system better satisfies the low-complexity and energy-efficiency requirements of the terminal devices (TDs) than the middleware approach does.
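The sketch below illustrates the TR focusing effect described above: transmitting the time-reversed complex conjugate of the channel impulse response makes all multipath contributions add coherently at a single tap. The synthetic, exponentially decaying channel is illustrative only.

```python
# Minimal sketch of the time-reversal focusing effect: the transmit waveform
# is the time-reversed complex conjugate of the channel impulse response, so
# the equivalent channel peaks sharply at one instant.  The synthetic
# multipath channel below is illustrative only.
import numpy as np

rng = np.random.default_rng(2)
L = 64                                            # number of multipath taps
h = (rng.normal(size=L) + 1j * rng.normal(size=L)) * np.exp(-np.arange(L) / 20)

g = np.conj(h[::-1]) / np.linalg.norm(h)          # TR waveform
equiv = np.convolve(h, g)                         # equivalent channel h * g

mags = np.abs(equiv)
print(f"focusing peak at tap {mags.argmax()} (expected {L - 1}), "
      f"peak-to-next ratio {mags.max() / np.sort(mags)[-2]:.1f}")
```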
Abstract:
Currently, thermal energy generation through coal combustion produces ash particles, known as fly ash (FA), which cause serious environmental problems. FA's main components are oxides of silicon, aluminum, iron, calcium and magnesium, in addition to toxic metals such as arsenic and cobalt. The use of fly ash as a cement replacement material increases the long-term strength and durability of concrete. In this work, samples were prepared by replacing cement with ground fly ash at 10, 20 and 30% by weight. The raw materials and microstructure were characterized by Scanning Electron Microscopy (SEM) and X-ray Diffraction (XRD). The final results showed that the grinding process significantly improves the mechanical properties of all samples when mortars in which cement was replaced by ground fly ash are compared with reference samples without added fly ash. The beneficial effect of the ground fly ash can increase the use of this product in the precast concrete industry.
Abstract:
Soil degradation affects more than 52 million ha of land in countries of the European Union. This problem is particularly serious in Mediterranean areas, where the effects of anthropogenic activities (tillage on slopes, deforestation, and pasture production) add to problems caused by prolonged periods of drought and intense, irregular rainfall. Soil microbiota can be used as an indicator of soil health in degraded areas, because the microbiota participates in element cycling and in organic matter decomposition; this helps young plants to establish and, in the long term, protects the soil against erosion. During dry periods in Mediterranean areas, the lack of water entering the soil matrix leads to a loss of soil microbiological activity and, in turn, to lower soil production capability. Under these conditions, the aim of this study was to evaluate the effect of a hydroabsorbent polymer (Terracottem) on the biological components of the soil. An experimental flowerpot layout was established in June 2015, with 12 variants combining three amounts of Terracottem, i) 3.0 kg·m⁻³, ii) 1.5 kg·m⁻³ and iii) 0 kg·m⁻³, with the following additives: a) 1% glucose; b) 50 kg N·ha⁻¹ of mineral nitrogen; c) 1% glucose + 50 kg N·ha⁻¹ of mineral nitrogen; and d) control (no additive). In line with natural conditions, soil moisture was kept at 15% in all variants. Over four weeks, mineral nitrogen leaching and soil respiration were measured in each flowerpot. Respiration was quantified four times, each time after moistening the containers, using alkaline soda lime as a sorbent; the increase in CO2 was measured from the sorbent. Leaching of mineral nitrogen was quantified using ion exchange resins (IER): IER pouches were placed at the bottom of each container, and after completion of the experiment mineral nitrogen leaching was evaluated by distillation and titration. The respiration results showed statistically significant differences between the variants: relative to the control, soils with polymer showed significantly different respiration regardless of the additive used, and CO2 production in the first week exceeded the sum of the outputs of the following weeks. Mineral nitrogen leaching measurements also showed statistically significant differences: the lowest leaching occurred in the control variant, and the highest in the variant containing only the addition of mineral nitrogen. The results suggest that the biological part of the tested soil is not limited by a lack of components; the only factor suppressing its activity is the lack of moisture. After moistening, soil activity grows rapidly without causing nutrient loss. Terracottem itself affected soil activity neither positively nor negatively, but it is considered a suitable tool for reducing the impact of drought in arid and semi-arid areas.
Abstract:
When it comes to information sets in real life, pieces of the whole set are often not available. This problem can originate from various causes and therefore describes different patterns. In the literature, this problem is known as Missing Data. The issue can be handled in various ways: not taking incomplete observations into consideration, estimating what the missing values originally were, or simply ignoring the fact that some values are missing. The methods used to estimate missing data are called Imputation Methods. The work presented in this thesis has two main goals. The first is to determine whether any kind of interaction exists between Missing Data, Imputation Methods and Supervised Classification algorithms when they are applied together. For this first problem we consider a scenario in which the databases used are discrete, in the sense that no relation between observations is assumed. These datasets underwent processes involving different combinations of the three components mentioned. The outcome showed that the missing data pattern strongly influences the results produced by a classifier. Also, in some of the cases, the complex imputation techniques investigated in the thesis were able to obtain better results than simple ones. The second goal of this work is to propose a new imputation strategy, but this time we constrain the specifications of the previous problem to a special kind of dataset: multivariate Time Series. We designed new imputation techniques for this particular domain and combined them with some of the contrasted strategies tested in the previous chapter of this thesis. The time series were likewise subjected to processes involving missing data and imputation, in order to finally propose an overall better imputation method. In the final chapter of this work, a real-world example is presented, describing a water quality prediction problem. The databases that characterized this problem had their own original latent values, which provides a real-world benchmark to test the algorithms developed in this thesis.
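The sketch below shows two of the simple strategies of the kind contrasted in the thesis: column-mean imputation for tabular ("discrete") data, and time-based interpolation for a series; it is illustrative only and does not reproduce the thesis's proposed methods.

```python
# Minimal sketch of two simple imputation strategies: column-mean imputation
# for tabular data, and linear time interpolation for a time series.
# Illustrative only; the thesis's proposed methods are more elaborate.
import numpy as np
import pandas as pd

tabular = pd.DataFrame({"a": [1.0, np.nan, 3.0], "b": [4.0, 5.0, np.nan]})
mean_imputed = tabular.fillna(tabular.mean())       # ignores row order

series = pd.Series([0.2, np.nan, np.nan, 0.8],
                   index=pd.date_range("2020-01-01", periods=4, freq="D"))
interpolated = series.interpolate(method="time")    # exploits time ordering

print(mean_imputed, interpolated, sep="\n\n")
```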
Abstract:
The ability to use Software Defined Radio (SDR) in civilian mobile applications will make it possible for the next generation of mobile devices to handle multi-standard personal wireless devices and ubiquitous wireless devices. The original military standard created many beneficial characteristics for SDR, but resulted in a number of disadvantages as well. Many challenges in commercializing SDR are still the subject of interest in the software radio research community. Four main issues that have already been addressed are performance, size, weight, and power. This investigation presents an in-depth study of SDR inter-component communications in terms of total link delay as a function of the number of components and packet sizes in systems based on the Software Communication Architecture (SCA). The study is based on an investigation of a controlled-environment platform. Results suggest that the total link delay does not increase linearly with the number of components and packet sizes. A closed-form expression for the delay was modeled using a logistic function of the number of components and packet sizes. The model performed well when the number of components was large. For mobile applications, energy consumption has become one of the most crucial limitations. SDR will not only provide the flexibility of multi-protocol support, but this desirable feature will also bring a choice of mobile protocols. Having such a variety of choices available creates a problem in selecting the most appropriate protocol for transmission. An investigation into a real-time algorithm to optimize energy efficiency was also performed. Communication energy models, including switching estimation, were used to develop a waveform selection algorithm. Simulations were performed to validate the concept.
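The sketch below fits a logistic model of total link delay versus the number of components, in the spirit of the closed-form expression described; the synthetic measurements and parameter values are hypothetical, not the study's data.

```python
# Minimal sketch of fitting a logistic delay model versus the number of
# components; synthetic measurements and parameters, illustrative only.
import numpy as np
from scipy.optimize import curve_fit

def logistic_delay(n, d_max, k, n0):
    """Saturating delay model: d_max / (1 + exp(-k * (n - n0)))."""
    return d_max / (1.0 + np.exp(-k * (n - n0)))

n = np.arange(1, 21)                               # number of components
rng = np.random.default_rng(3)
measured = logistic_delay(n, 12.0, 0.5, 8.0) + rng.normal(0, 0.2, n.size)

params, _ = curve_fit(logistic_delay, n, measured, p0=[10.0, 1.0, 5.0])
print("fitted d_max, k, n0:", np.round(params, 2))
```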
Abstract:
Low-frequency electromagnetic compatibility (EMC) is an increasingly important aspect in the design of practical systems, to ensure the functional safety and reliability of complex products. The opportunities for using numerical techniques to predict and analyze a system's EMC are therefore of considerable interest in many industries. As the first phase of the study, a proper model including all the details of the components was required. Therefore, advances in EMC modeling were studied, classifying analytical and numerical models. The selected approach was finite element (FE) modeling, coupled with the distributed network method, to generate models of the converter's components and obtain the frequency behavioral model of the converter. The method has the ability to reveal the behavior of parasitic elements and higher resonances, which have critical impacts on EMI problems. For the EMC and signature studies of machine drives, equivalent-source modeling was studied. Considering the details of the multi-machine environment, including actual models, innovations in equivalent-source modeling were introduced to decrease the simulation time dramatically. Several models were designed in this study; the voltage-current cube model and the wire model gave the best results. A GA-based particle swarm optimization (PSO) method was used for the optimization process. Superposition and suppression of the fields when coupling the components were also studied and verified. The simulation time of the equivalent model is 80-100 times lower than that of the detailed model. All tests were verified experimentally. As an application of the EMC and signature study, fault diagnosis and condition monitoring of an induction motor drive were developed using radiated fields. In addition to experimental tests, 3D FE analysis was coupled with circuit-based software to implement the incipient fault cases. The identification was implemented using an artificial neural network (ANN) for seventy different fault cases. The simulation results were verified experimentally. Finally, identification of the types of power components was implemented. The results show that it is possible to identify the type of components, as well as the faulty components, by comparing the amplitudes of their stray field harmonics. Identification using stray fields is nondestructive and can be used for setups that cannot go offline and be dismantled.
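The sketch below shows a plain particle swarm optimizer on a toy objective; the study used a GA-based PSO variant for its equivalent-source models, which this simplified version does not reproduce.

```python
# Minimal sketch of plain particle swarm optimisation on a toy objective;
# the study's GA-based PSO variant is not reproduced here.
import numpy as np

def pso(objective, dim=2, n_particles=30, iters=200, seed=4):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))     # particle positions
    v = np.zeros_like(x)                           # particle velocities
    pbest = x.copy()                               # personal bests
    pbest_f = np.apply_along_axis(objective, 1, x)
    gbest = pbest[pbest_f.argmin()].copy()         # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = x + v
        f = np.apply_along_axis(objective, 1, x)
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest

print(pso(lambda p: np.sum(p**2)))                 # minimum near the origin
```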