957 results for Modeling technique
Abstract:
Several approaches have been introduced in the literature for active noise control (ANC) systems. Since the filtered-x least-mean-square (FxLMS) algorithm appears to be the best choice as a controller filter, researchers tend to improve the performance of ANC systems by enhancing and modifying this algorithm. As a first novelty, this paper proposes a new version of the FxLMS algorithm. In many ANC applications, an on-line secondary path modeling method that uses white noise as a training signal is required to ensure convergence of the system. As a second novelty, this paper proposes a new approach for on-line secondary path modeling based on a new variable-step-size (VSS) LMS algorithm in feedforward ANC systems. The proposed algorithm is designed so that injection of the white noise stops at the optimum point, once the modeling accuracy is sufficient; if the secondary path changes suddenly during operation, the algorithm reactivates the injection to re-adjust the secondary path estimate. Comparative simulation results indicate the effectiveness of the proposed approach in reducing both narrow-band and broad-band noise. In addition, the proposed ANC system is robust against sudden changes of the secondary path model.
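As a rough illustration of the control loop this abstract builds on, the following minimal sketch implements a plain fixed-step FxLMS update in Python with NumPy. The filter length, step size, and path responses are illustrative assumptions, not values from the paper, and the paper's VSS extension would replace the fixed mu below.

    import numpy as np

    def fxlms(x, d, s_hat, L=32, mu=0.01):
        """Plain FxLMS: adapt control filter w so that its output,
        sent through the secondary path, cancels the disturbance d.
        x     : reference signal
        d     : disturbance observed at the error microphone
        s_hat : estimated secondary path impulse response
        """
        w = np.zeros(L)                       # adaptive control filter
        x_f = np.convolve(x, s_hat)[:len(x)]  # reference filtered by S-hat
        e = np.zeros(len(x))
        for n in range(L, len(x)):
            x_win = x[n-L:n][::-1]            # most recent L reference samples
            y = w @ x_win                     # anti-noise sample
            # in a real system e is measured; here we idealize S == S-hat
            e[n] = d[n] - y
            xf_win = x_f[n-L:n][::-1]
            w += mu * e[n] * xf_win           # FxLMS weight update
        return w, e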
Abstract:
Embedded siloxane polymer waveguides have shown promising results for use in optical backplanes. They exhibit high temperature stability and low optical absorption, and they require only common processing techniques. A challenging aspect of this technology is out-of-plane coupling of the waveguides. A multi-software approach to modeling an optical vertical interconnect (via) is proposed. This approach uses the beam propagation method to generate varied modal field distributions, which are then propagated through a via model using the angular spectrum propagation technique. Simulation results show average losses between 2.5 and 4.5 dB for different initial input conditions. Certain configurations show losses of less than 3 dB, and it is shown that in an input/output pair of vias, the average loss per via may be lower than the targeted 3 dB.
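For readers unfamiliar with the second modeling step, here is a minimal NumPy sketch of angular spectrum propagation of a scalar field over a distance dz. The grid size, wavelength, and sampling are assumptions for illustration and are not taken from the paper.

    import numpy as np

    def angular_spectrum_propagate(field, wavelength, dx, dz):
        """Propagate a sampled 2-D complex field by dz using the
        angular spectrum method (scalar, monochromatic)."""
        n = field.shape[0]
        k = 2 * np.pi / wavelength
        fx = np.fft.fftfreq(n, d=dx)            # spatial frequencies
        FX, FY = np.meshgrid(fx, fx)
        kz_sq = k**2 - (2*np.pi*FX)**2 - (2*np.pi*FY)**2
        kz = np.sqrt(np.maximum(kz_sq, 0.0))
        H = np.exp(1j * kz * dz)                # free-space transfer function
        H[kz_sq < 0] = 0.0                      # drop evanescent components
        return np.fft.ifft2(np.fft.fft2(field) * H)

    # illustrative use: a Gaussian mode propagated 50 micrometres
    x = np.linspace(-20e-6, 20e-6, 256)
    X, Y = np.meshgrid(x, x)
    mode = np.exp(-(X**2 + Y**2) / (5e-6)**2)
    out = angular_spectrum_propagate(mode, wavelength=1.55e-6,
                                     dx=x[1]-x[0], dz=50e-6)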
Abstract:
The success of contemporary organizations depends on their ability to make appropriate decisions, and making appropriate decisions is inevitably bound to the availability and provision of relevant information. Information systems should be able to provide this information efficiently; thus, information systems development must include a detailed analysis of information supply and information demand. Based on Szyperski's information-set and subset model, we give an epistemological foundation of information modeling in general and show why conceptual modeling in particular is capable of specifying effective and efficient information systems. Furthermore, we derive conceptual modeling requirements based on our findings. A short example illustrates the usefulness of a conceptual data modeling technique for the specification of information systems.
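Purely to make the notion of a conceptual data model concrete, the toy sketch below declares a small entity-relationship-style schema in Python, independently of any storage technology. The entities, attributes, and relationship are invented for illustration and are not the paper's example.

    from dataclasses import dataclass, field

    @dataclass
    class Entity:
        name: str
        attributes: list[str] = field(default_factory=list)

    @dataclass
    class Relationship:
        name: str
        source: Entity
        target: Entity
        cardinality: str  # e.g. "1:N"

    # hypothetical conceptual schema fragment
    customer = Entity("Customer", ["customer_id", "name", "segment"])
    order    = Entity("Order", ["order_id", "date", "amount"])
    places   = Relationship("places", customer, order, "1:N")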
Abstract:
Supervisory Control and Data Acquisition (SCADA) systems are widely used to control critical infrastructure automatically. Capturing and analyzing packet-level traffic flowing through such a network is an essential requirement for problems such as legacy network mapping and fault detection. Working from captured network traffic, we present a simple modeling technique that supports the mapping of the SCADA network topology via traffic monitoring. By characterizing atomic network components in terms of their input-output topology and the relationship between their data traffic logs, we show that these modeling primitives have good compositional behaviour, which allows complex networks to be modeled. Finally, the predictions generated by our model are found to be in good agreement with experimentally obtained traffic.
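A minimal sketch of the compositional idea, with hypothetical component types not taken from the paper: each atomic component maps an input traffic rate to an output rate, and composition chains these maps so a larger network's traffic can be predicted from its parts.

    from typing import Callable

    # A traffic model maps an input packet rate (packets/s) to an
    # output rate. Atomic components and their composition (toy example).
    TrafficModel = Callable[[float], float]

    def plc(poll_rate: float) -> TrafficModel:
        # a PLC answers at most poll_rate responses per second
        return lambda rate_in: min(rate_in, poll_rate)

    def switch(fanout: int) -> TrafficModel:
        # an ideal switch forwards traffic to `fanout` attached devices
        return lambda rate_in: rate_in * fanout

    def compose(*models: TrafficModel) -> TrafficModel:
        # feed the output of one component into the next
        def chained(rate_in: float) -> float:
            for m in models:
                rate_in = m(rate_in)
            return rate_in
        return chained

    network = compose(switch(fanout=3), plc(poll_rate=10.0))
    print(network(5.0))  # predicted response rate for a 5 packets/s poll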
Abstract:
The long- and short-period body waves of a number of moderate earthquakes occurring in central and southern California, recorded at regional (200-1400 km) and teleseismic (> 30°) distances, are modeled to obtain the source parameters: focal mechanism, depth, seismic moment, and source time history. The modeling is done in the time domain using a forward modeling technique based on ray summation. A simple layer-over-a-half-space velocity model is used, with additional layers added if necessary (for example, in a basin with a low-velocity lid).
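As a schematic of what forward modeling by ray summation involves, the sketch below builds a synthetic seismogram by summing delayed, scaled ray arrivals and convolving with a trapezoidal source time function. The arrival times, amplitudes, and durations are invented for illustration and are not values from this work.

    import numpy as np

    def synthetic_seismogram(arrivals, stf, n, dt):
        """Forward modeling by ray summation (schematic).
        arrivals : list of (travel_time_s, amplitude) per ray
        stf      : sampled source time function
        """
        trace = np.zeros(n)
        for t_arr, amp in arrivals:
            i = int(round(t_arr / dt))
            if i < n:
                trace[i] += amp             # impulse response of the ray sum
        return np.convolve(trace, stf)[:n]  # apply the source time history

    dt = 0.05
    # trapezoidal source time function (rise, flat, fall) - illustrative
    stf = np.concatenate([np.linspace(0, 1, 10),
                          np.ones(20),
                          np.linspace(1, 0, 10)])
    # direct ray plus a reflection with reversed polarity (toy values)
    rays = [(4.0, 1.0), (5.2, -0.6)]
    u = synthetic_seismogram(rays, stf, n=1024, dt=dt)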
The earthquakes studied fall into two geographic regions: 1) the western Transverse Ranges, and 2) the western Imperial Valley. Earthquakes in the western Transverse Ranges include the 1987 Whittier Narrows earthquake, several offshore earthquakes that occurred between 1969 and 1981, and aftershocks of the 1983 Coalinga earthquake (these actually occurred north of the Transverse Ranges but share many characteristics with those that occurred there). These earthquakes are predominantly thrust-faulting events, with the average strike being east-west, but with many variations. Of the six earthquakes that had sufficient short-period data to accurately determine the source time history, five were complex events; that is, they could not be modeled as a simple point source but consisted of two or more subevents. The subevents of the Whittier Narrows earthquake had different focal mechanisms. In the other cases, the subevents appear to be the same, but small variations could not be ruled out.
The recent Imperial Valley earthquakes modeled include the two 1987 Superstition Hills earthquakes and the 1969 Coyote Mountain earthquake. All are strike-slip events, and the second 1987 earthquake is a complex event with non-identical subevents.
In all the earthquakes studied, and particularly the thrust events, constraining the source parameters required modeling several phases and distance ranges. Teleseismic P waves could provide only approximate solutions. Pnl waves were probably the most useful phase in determining the focal mechanism, with additional constraints supplied by the SH waves when available. Contamination of the SH waves by shear-coupled PL waves was a frequent problem. Short-period data were needed to obtain the source time function.
In addition to the earthquakes mentioned above, several historic earthquakes were also studied. Earthquakes that occurred before the existence of dense local and worldwide networks are difficult to model due to the sparse data set. It has been noticed that earthquakes that occur near each other often produce similar waveforms, implying similar source parameters. By comparing recent, well-studied earthquakes to historic earthquakes in the same region, better constraints can be placed on the source parameters of the historic events.
The Lompoc earthquake (M = 7) of 1927 is the largest offshore earthquake to occur in California this century. By direct comparison of waveforms and amplitudes with the Coalinga and Santa Lucia Banks earthquakes, the focal mechanism (thrust faulting on a northwest-striking fault) and long-period seismic moment (10^26 dyne-cm) can be obtained. The S-P travel times are consistent with an offshore location, rather than one in the Hosgri fault zone.
Historic earthquakes in the western Imperial Valley were also studied, including the 1937, 1942, and 1954 earthquakes. The events were relocated by comparing S-P and R-S times to those of recent earthquakes. It was found that only minor changes in the epicenters were required, but that the Coyote Mountain earthquake may have been more severely mislocated. The waveforms indicated, as expected, that all the events were strike-slip. Moment estimates were obtained by comparing the amplitudes of recent and historic events at stations that recorded both. The 1942 event was smaller than the 1968 Borrego Mountain earthquake, although some previous studies suggested the reverse. The 1954 and 1937 earthquakes had moments close to the expected values. An aftershock of the 1942 earthquake appears to be larger than previously thought.
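The amplitude-comparison step described above amounts to simple scaling arithmetic; a hedged sketch with invented station amplitudes (real work would average over several stations and phases) is:

    # Estimate a historic event's seismic moment by amplitude comparison
    # at a common station (toy numbers for illustration only).

    M0_recent  = 1.0e26   # dyne-cm, moment of a well-studied recent event
    A_recent   = 42.0     # long-period amplitude of the recent event
    A_historic = 18.0     # amplitude of the historic event, same station

    # long-period amplitude scales roughly linearly with moment for
    # similar mechanism, depth, and path
    M0_historic = M0_recent * (A_historic / A_recent)
    print(f"{M0_historic:.2e} dyne-cm")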
Abstract:
The Indo-Pacific panther grouper (Chromileptes altivelis) is a predatory fish species and a popular imported aquarium fish in the United States that has recently been documented residing in western Atlantic waters. To date, the most successful marine invasive species in the Atlantic is the lionfish (Pterois volitans/miles), which, like the panther grouper, is assumed to have been introduced to the wild through aquarium releases. However, unlike the lionfish, the panther grouper is not yet thought to have an established breeding population in the Atlantic. Using a proven modeling technique developed to track the lionfish invasion, we present the first known estimate of the potential spread of the panther grouper in the Atlantic. The cellular automaton-based computer model examines the life history of the subject species, including fecundity, mortality, and reproductive potential, and combines this with habitat preferences and physical oceanic parameters to forecast the distribution and periodicity of spread of this potential new invasive species. Simulations were examined for origination points within one degree of capture locations of panther grouper from the United States Geological Survey Nonindigenous Aquatic Species Database to eliminate introduction-location bias, and two detailed case studies were scrutinized. The model indicates three primary locations where settlement is likely given the inputs and limits of the model: Jupiter, Florida/Vero Beach; the Cape Hatteras tropical limit/Myrtle Beach, South Carolina; and the Florida Keys/Ten Thousand Islands. Of these, Jupiter, Florida/Vero Beach has the highest settlement rate in the model and is indicated as the area in which the panther grouper is most likely to become established. This insight is valuable if attempts are to be made to halt this potential marine invasive species.
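To make the cellular automaton idea concrete, here is a heavily simplified sketch of lattice-based larval spread. The grid size, currents, and demographic rates are invented and do not correspond to the study's calibrated model.

    import numpy as np

    rng = np.random.default_rng(0)

    def step(pop, habitat, fecundity=0.3, mortality=0.2, drift=1):
        """One generation of a toy cellular-automaton spread model.
        pop     : 2-D array of adult densities per cell
        habitat : 2-D array in [0, 1], habitat suitability per cell
        drift   : cells of net larval transport (stand-in for currents)
        """
        larvae = pop * fecundity
        # advect larvae along one axis to mimic a prevailing current
        larvae = np.roll(larvae, drift, axis=1)
        # local diffusion: some larvae settle in the 4-neighbourhood
        spread = sum(np.roll(larvae, s, axis=a)
                     for a in (0, 1) for s in (-1, 1)) / 4.0
        settled = (larvae + spread) * habitat
        return np.clip(pop * (1 - mortality) + settled, 0, 1)

    habitat = rng.random((50, 80))
    pop = np.zeros((50, 80)); pop[25, 5] = 1.0   # single introduction point
    for _ in range(100):
        pop = step(pop, habitat)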
Abstract:
The paper is primarily concerned with the modelling of aircraft manufacturing cost. The aim is to establish an integrated, life-cycle-balanced design process through a systems engineering approach to interdisciplinary analysis and control. The cost modelling is achieved using the genetic causal approach, which enforces product family categorisation and the subsequent generation of causal relationships between deterministic cost components and their design source. This utilises causal parametric cost drivers and the definition of the physical architecture from the Work Breakdown Structure (WBS) to identify product families. The paper presents applications to overall aircraft design, with a particular focus on the fuselage as a subsystem of the aircraft, including fuselage panels and localised detail, as well as engine nacelles. The higher-level application to aircraft requirements and functional analysis is investigated and verified relative to life cycle design issues for the relationship between acquisition cost and Direct Operating Cost (DOC), for a range of both metal and composite subsystems. Maintenance is considered in some detail as an important contributor to DOC and life cycle cost. The lower-level application to aircraft physical architecture is investigated and verified for the WBS of an engine nacelle, including a sequential build-stage investigation of the materials, fabrication, and assembly costs. The studies are then extended by investigating the acquisition cost of aircraft fuselages, including the recurring unit cost and the non-recurring design cost of the airframe subsystem. The systems costing methodology is facilitated by the genetic causal cost modelling technique, as the latter is highly generic, interdisciplinary, flexible, multilevel, and recursive in nature, and can be applied at the various analysis levels required of systems engineering. Therefore, the main contribution of the paper is a methodology for applying systems engineering costing, supported by the genetic causal cost modelling approach, whether at a requirements, functional, or physical level.
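A bare-bones illustration of causal parametric cost estimation over a WBS: each element's cost follows a driver relationship (here a toy power law on weight), summed up the structure. The element names, coefficients, and drivers below are invented and are not the paper's values.

    from dataclasses import dataclass, field

    @dataclass
    class WBSElement:
        name: str
        a: float = 0.0       # causal cost coefficient (toy)
        b: float = 1.0       # exponent on the driver (toy)
        driver: float = 0.0  # e.g. structural weight in kg
        children: list["WBSElement"] = field(default_factory=list)

        def cost(self) -> float:
            # parametric relationship cost = a * driver**b, plus children
            own = self.a * self.driver ** self.b if self.driver else 0.0
            return own + sum(c.cost() for c in self.children)

    nacelle = WBSElement("engine nacelle", children=[
        WBSElement("inlet cowl",       a=420.0, b=0.85, driver=120.0),
        WBSElement("fan cowl",         a=380.0, b=0.85, driver=95.0),
        WBSElement("thrust reverser",  a=510.0, b=0.90, driver=160.0),
    ])
    print(f"nacelle acquisition cost ~ {nacelle.cost():,.0f} (toy units)")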
Abstract:
This paper presents an efficient modeling technique for the derivation of the dispersion characteristics of novel uniplanar metallodielectric periodic structures. The analysis is based on the method of moments and an interpolation scheme, which significantly accelerates the computations. Triangular basis functions are used that allow for the modeling of arbitrarily shaped metallic elements. Based on this method, novel uniplanar left-handed (LH) metamaterials are proposed. Variations of the split rectangular-loop element printed on a grounded dielectric substrate are demonstrated to possess LH propagation properties. Full-wave dispersion curves are presented. Based on the dual transmission-line concept, we study the distribution of the modal fields and the variation of series capacitance and shunt inductance for all the proposed elements. A verification of the left-handedness is presented by means of full-wave simulation of finite uniplanar arrays using commercial software (HFSS). The cell dimensions are a small fraction of the wavelength (approximately λ/24), so the structures can be considered a homogeneous effective medium. The structures are simple, readily scalable to higher frequencies, and compatible with low-cost fabrication techniques.
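The dual transmission-line picture invoked above leads to the textbook Bloch dispersion relation for a periodic LC unit cell, cos(βp) = 1 + Z_se(ω)·Y_sh(ω)/2. The sketch below evaluates it for assumed element values, which are illustrative and not those extracted in the paper.

    import numpy as np

    # Dispersion of a composite right/left-handed unit cell of length p,
    # from cos(beta*p) = 1 + Z_series*Y_shunt/2 (standard Bloch analysis).
    # Element values are assumptions for illustration only.
    p = 2.0e-3                     # unit cell length (m)
    L_R, C_R = 2.5e-9, 1.0e-12     # right-handed (host line) per cell
    L_L, C_L = 1.5e-9, 0.8e-12     # left-handed (loading) per cell

    f = np.linspace(0.5e9, 20e9, 2000)
    w = 2 * np.pi * f
    Z_se = 1j * w * L_R + 1.0 / (1j * w * C_L)   # series branch
    Y_sh = 1j * w * C_R + 1.0 / (1j * w * L_L)   # shunt branch
    cos_bp = 1.0 + (Z_se * Y_sh).real / 2.0

    passband = np.abs(cos_bp) <= 1.0             # propagating Bloch modes
    beta = np.full_like(f, np.nan)
    beta[passband] = np.arccos(cos_bp[passband]) / p
    # note: arccos returns |beta|; the LH branch carries negative
    # phase velocity, which this magnitude-only sketch does not resolve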
Abstract:
Master's dissertation in Marine Biology, Faculdade de Ciências e Tecnologia, Universidade do Algarve, 2015
Abstract:
We present a new approach for corpus-based speech enhancement that significantly improves over a method published by Xiao and Nickel in 2010. Corpus-based enhancement systems do not merely filter an incoming noisy signal, but resynthesize its speech content via an inventory of pre-recorded clean signals. The goal of the procedure is to perceptually improve the sound of speech signals in background noise. The proposed method modifies Xiao's method in four significant ways. Firstly, it employs a Gaussian mixture model (GMM) instead of a vector quantizer in the phoneme recognition front-end. Secondly, the state decoding of the recognition stage is supported with an uncertainty modeling technique. With the GMM and the uncertainty modeling it is possible to eliminate the need for noise-dependent system training. Thirdly, the post-processing of the original method via sinusoidal modeling is replaced with a powerful cepstral smoothing operation. And lastly, due to these modifications, it is possible to extend the operational bandwidth of the procedure from 4 kHz to 8 kHz. The performance of the proposed method was evaluated across different noise types and different signal-to-noise ratios. The new method significantly outperformed traditional methods, including the one by Xiao and Nickel, in terms of PESQ scores and other objective quality measures. Results of subjective CMOS tests over a smaller set of test samples support our claims.
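Cepstral smoothing of a spectral gain function, the replacement post-processing named above, can be sketched in a few lines: take the real cepstrum of the log gain, keep only low quefrencies, and transform back. The liftering cutoff and FFT size are assumptions, not the paper's settings.

    import numpy as np

    def cepstral_smooth_gain(gain, keep=20):
        """Smooth a per-frame spectral gain by low-pass liftering its
        real cepstrum (gain: magnitude values on an rFFT grid)."""
        log_gain = np.log(np.maximum(gain, 1e-8))
        cep = np.fft.irfft(log_gain)        # real cepstrum of the log gain
        lifter = np.zeros_like(cep)
        lifter[:keep] = 1.0                 # keep low quefrencies only
        lifter[-keep+1:] = 1.0              # symmetric counterpart
        return np.exp(np.fft.rfft(cep * lifter).real)

    # illustrative use on a noisy random gain curve (257 bins ~ 512-pt FFT)
    raw_gain = np.clip(0.5 + 0.4*np.random.randn(257), 0.05, 1.0)
    smooth_gain = cepstral_smooth_gain(raw_gain, keep=20)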
Abstract:
Materials are inherently multi-scale in nature, consisting of distinct characteristics at various length scales, from atoms to the bulk material. There are no widely accepted predictive multi-scale modeling techniques that span from the atomic level to the bulk and relate the effects of structure at the nanometer scale (10^-9 m) to macro-scale properties. Traditional engineering treats matter as continuous, with no internal structure. In contrast to engineers, physicists have dealt with matter in its discrete structure at small length scales to understand the fundamental behavior of materials. Multiscale modeling is of great scientific and technical importance, as it can aid in designing novel materials that enable us to tailor properties specific to an application, such as multi-functional materials. Polymer nanocomposite materials have the potential to provide significant increases in mechanical properties relative to current polymers used for structural applications. Nanoscale reinforcements have the potential to increase the effective interface between the reinforcement and the matrix by orders of magnitude for a given reinforcement volume fraction relative to traditional micro- or macro-scale reinforcements. To facilitate the development of polymer nanocomposite materials, constitutive relationships must be established that predict the bulk mechanical properties of the materials as a function of their molecular structure. A computational hierarchical multiscale modeling technique is developed to study the bulk-level constitutive behavior of polymeric materials as a function of their molecular chemistry. Various parameters and modeling techniques, from computational chemistry to continuum mechanics, are utilized in the current modeling method. The cause-and-effect relationships of the parameters are studied to establish an efficient modeling framework. The proposed methodology is applied to three different polymers and validated using experimental data available in the literature.
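One common upscaling step in hierarchical schemes of this kind is micromechanical homogenization. The sketch below applies the Halpin-Tsai equations, a standard relation (though not necessarily the one used in this work), with invented constituent properties.

    def halpin_tsai(E_m, E_f, vf, xi=2.0):
        """Halpin-Tsai estimate of a composite modulus.
        E_m : matrix modulus (GPa), E_f : reinforcement modulus (GPa)
        vf  : reinforcement volume fraction, xi : shape parameter
        """
        eta = (E_f / E_m - 1.0) / (E_f / E_m + xi)
        return E_m * (1.0 + xi * eta * vf) / (1.0 - eta * vf)

    # illustrative: epoxy-like matrix with a stiff nano-reinforcement
    E_bulk = halpin_tsai(E_m=3.0, E_f=500.0, vf=0.02)
    print(f"estimated composite modulus: {E_bulk:.2f} GPa")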