950 results for High Definition Finite Difference Time Domain
Abstract:
This paper presents preliminary results on determining small displacements of a global positioning system (GPS) antenna fastened to a structure using only one L1 GPS receiver. Vibrations, periodic or not, are common in large structures, such as bridges, footbridges, tall buildings, and towers under dynamic loads, and their behaviour in time and frequency informs structural analysis studies. The hypothesis of this article is that any large structure that presents vibrations in the centimeter-to-millimeter range can be monitored through the phase measurements of a single L1 receiver with a high data rate, as long as the direction of the displacement points toward a particular satellite. In this scenario, the carrier phase is modulated by the antenna displacement. Over a period of a few dozen seconds, the displacement relative to the satellite, the satellite clock, and the atmospheric phase delays can be modelled as a polynomial function of time. The residuals of a polynomial adjustment then contain the phase modulation due to small displacements, random noise, short-term receiver clock instabilities, and multipath. The results showed that it is possible to detect displacements of centimeters in the phase data of a single satellite and of millimeters in the difference between the phases of two satellites. After a periodic nonsinusoidal displacement of 10 mm was applied to the antenna, it was clearly recovered in the difference of the residuals. The frequency spectrum obtained by the fast Fourier transform (FFT) exhibited a well-defined peak at the third harmonic, well above the random noise, using the proposed third-degree polynomial model. DOI: 10.1061/(ASCE)SU.1943-5428.0000070. (C) 2012 American Society of Civil Engineers.
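A minimal numerical sketch of the proposed processing chain (data rate, amplitudes and noise level below are illustrative assumptions, not values from the paper):

```python
import numpy as np

fs = 10.0                                  # assumed high-rate receiver, 10 Hz
t = np.arange(0.0, 60.0, 1.0 / fs)         # a few dozen seconds of phase data

# Synthetic carrier-phase series (metres): a slow geometric/clock/atmospheric
# trend plus a 1 Hz square-wave-like (nonsinusoidal) 10 mm displacement.
trend = 5.0 + 0.8 * t + 0.01 * t**2
displacement = 0.005 * np.sign(np.sin(2.0 * np.pi * 1.0 * t))
phase = trend + displacement + np.random.normal(0.0, 0.002, t.size)

# Third-degree polynomial adjustment; the residuals keep the modulation.
residuals = phase - np.polyval(np.polyfit(t, phase, 3), t)

# FFT of the residuals: the odd harmonics of the square wave stand out
# above the noise floor, as reported for the third harmonic in the paper.
spectrum = np.abs(np.fft.rfft(residuals)) / t.size
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
print(f"dominant component near {freqs[1 + np.argmax(spectrum[1:])]:.2f} Hz")
```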
Abstract:
Proton nuclear magnetic resonance (H-1 NMR) spectroscopy is a successful technique for detecting biochemical changes in biological samples. However, the achievable NMR resolution is not sufficiently high when the analysis is performed on intact cells. To improve spectral resolution, high resolution magic angle spinning (HR-MAS) is used, and the broad signals are separated out by a T2 filter based on the CPMG pulse sequence. Additionally, HR-MAS experiments with a T2 filter are preceded by a water suppression procedure. The goal of this work is to demonstrate that the experimental procedures of water suppression and T2 or diffusion filters are unnecessary steps when the filter diagonalization method (FDM) is used to process the time domain HR-MAS signals. Manipulation of the FDM results, represented as a tabular list of peak positions, widths, amplitudes and phases, allows the removal of water signals without disturbing overlapping or nearby signals. Additionally, the FDM can be used for phase correction and noise suppression, and to discriminate between sharp and broad lines. The results demonstrate the applicability of FDM post-acquisition processing for obtaining high quality HR-MAS spectra of heterogeneous biological materials.
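A sketch of the kind of post-acquisition manipulation described, assuming the FDM output has already been reduced to a line list; the peak values and the 400 MHz spectrometer frequency are made-up placeholders:

```python
import numpy as np

# Hypothetical FDM line list: (position in ppm, width in Hz, amplitude, phase).
peaks = [
    (4.70, 40.0, 1000.0, 0.00),   # broad residual water line
    (3.21,  2.0,   12.0, 0.10),   # sharp metabolite line near water
    (1.33,  3.0,   20.0, -0.05),  # another sharp line
]

def rebuild_spectrum(peaks, ppm_axis, sf_hz=400.0e6, drop=lambda f, w: False):
    """Rebuild a phased absorption spectrum from the line list, skipping
    entries flagged by `drop` (water removal, broad-line suppression)."""
    spec = np.zeros_like(ppm_axis)
    for pos, width_hz, amp, phase in peaks:
        if drop(pos, width_hz):
            continue                       # removal without touching neighbours
        hwhm = 0.5 * width_hz / (sf_hz * 1e-6)        # half-width in ppm
        spec += (amp * np.cos(phase) * (hwhm / np.pi)
                 / ((ppm_axis - pos) ** 2 + hwhm ** 2))
    return spec

ppm = np.linspace(0.0, 6.0, 4096)
# Drop the water line and any broad component in a single pass.
clean = rebuild_spectrum(peaks, ppm,
                         drop=lambda f, w: abs(f - 4.7) < 0.2 or w > 20.0)
```

Because the water entry is simply omitted from the sum, nearby sharp lines are reconstructed untouched, which is the advantage over time-domain suppression filters.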
Abstract:
Providing support for multimedia applications on low-power mobile devices remains a significant research challenge, primarily for two reasons: (i) portable mobile devices have modest sizes and weights, and therefore inadequate resources, low CPU processing power, reduced display capabilities, and limited memory and battery lifetimes compared to desktop and laptop systems; (ii) multimedia applications tend to have distinctive QoS and processing requirements which make them extremely resource-demanding. This innate conflict introduces key research challenges in the design of multimedia applications and device-level power optimization. Energy efficiency on this kind of platform can be achieved only via a synergistic hardware and software approach. In fact, while Systems-on-Chip are more and more programmable, thus providing functional flexibility, hardware-only power reduction techniques cannot keep consumption within acceptable bounds. It is well understood both in research and in industry that system configuration and management cannot be controlled efficiently relying only on low-level firmware and hardware drivers: at this level there is a lack of information about user application activity and, consequently, about the impact of power management decisions on QoS. Even though operating system support and integration is a requirement for effective performance and energy management, more effective and QoS-sensitive power management is possible if power awareness and hardware configuration control strategies are tightly integrated with domain-specific middleware services. The main objective of this PhD research has been the exploration and integration of a middleware-centric energy management with applications and the operating system. We chose to focus on the CPU-memory and video subsystems, since they are the most power-hungry components of an embedded system. A second main objective has been the definition and implementation of software facilities (such as toolkits, APIs, and run-time engines) to improve the programmability and performance efficiency of such platforms. Enhancing energy efficiency and programmability of modern Multi-Processor Systems-on-Chip (MPSoCs): Consumer applications are characterized by tight time-to-market constraints and extreme cost sensitivity. The software that runs on modern embedded systems must be high performance, real time, and, even more important, low power. Although much progress has been made on these problems, much remains to be done. Multi-Processor Systems-on-Chip (MPSoCs) are increasingly popular platforms for high performance embedded applications. This leads to interesting challenges in software development, since efficient software development is a major issue for MPSoC designers. An important step in deploying applications on multiprocessors is to allocate and schedule concurrent tasks to the processing and communication resources of the platform. The problem of allocating and scheduling precedence-constrained tasks on processors in a distributed real-time system is NP-hard, so there is a clear need for deployment technology that addresses these multiprocessing issues. This problem can be tackled by means of specific middleware which takes care of allocating and scheduling tasks on the different processing elements and which also tries to optimize the power consumption of the entire multiprocessor platform.
This dissertation is an attempt to develop insight into efficient, flexible and optimal methods for allocating and scheduling concurrent applications to multiprocessor architectures. This is a well-known problem in the literature: such optimization problems are very complex even in much simplified variants, so most authors propose simplified models and heuristic approaches to solve them in reasonable time. Model simplification is often achieved by abstracting away platform implementation "details". As a result, optimization problems become more tractable, even reaching polynomial time complexity. Unfortunately, this approach creates an abstraction gap between the optimization model and the real HW-SW platform. The main issue with heuristics or, more generally, with incomplete search is that they introduce an optimality gap of unknown size: they provide very limited or no information on the distance between the best computed solution and the optimal one. The goal of this work is to address both the abstraction and the optimality gaps, formulating accurate models which account for a number of "non-idealities" in real-life hardware platforms, developing novel mapping algorithms that deterministically find optimal solutions, and implementing the software infrastructures required by developers to deploy applications on the target MPSoC platforms. Energy Efficient LCD Backlight Autoregulation on a Real-Life Multimedia Application Processor: Despite the ever increasing advances in Liquid Crystal Display (LCD) technology, LCD power consumption is still one of the major limitations to the battery life of mobile appliances such as smart phones, portable media players, and gaming and navigation devices. There is a clear trend towards increasing LCD sizes to exploit the multimedia capabilities of portable devices that can receive and render high definition video and pictures. Multimedia applications running on these devices require LCD screen sizes of 2.2 to 3.5 inches and more to display video sequences and pictures with the required quality. LCD power consumption depends on the backlight and on the pixel matrix driving circuits and is typically proportional to the panel area; as a result, its contribution is likely to remain considerable in future mobile appliances. To address this issue, companies are proposing low power display technologies suitable for mobile applications, supporting low power states and image control techniques. On the research side, several power saving schemes and algorithms can be found in the literature. Some of them exploit software-only techniques that change the image content to reduce the power associated with crystal polarization; others aim at decreasing the backlight level while offsetting the luminance reduction, and thus the perceived quality degradation, with pixel-by-pixel image processing algorithms. The major limitation of these techniques is that they rely on the CPU to perform pixel-based manipulations, and their impact on CPU utilization and power consumption has not been assessed. This PhD dissertation shows an alternative approach that exploits, in a smart and efficient way, the hardware image processing unit integrated in almost every current multimedia application processor to implement hardware-assisted image compensation, allowing dynamic scaling of the backlight with a negligible impact on QoS.
The proposed approach overcomes CPU-intensive techniques by saving system power without requiring either a dedicated display technology or hardware modifications. Thesis Overview: The remainder of the thesis is organized as follows. The first part is focused on enhancing the energy efficiency and programmability of modern Multi-Processor Systems-on-Chip (MPSoCs). Chapter 2 gives an overview of architectural trends in embedded systems, illustrating the principal features of new technologies and the key challenges still open. Chapter 3 presents a QoS-driven methodology for optimal allocation and frequency selection for MPSoCs, based on functional simulation and full system power estimation. Chapter 4 targets the allocation and scheduling of pipelined stream-oriented applications on top of distributed memory architectures with messaging support. We tackled the complexity of the problem by means of decomposition and no-good generation, and proved the increased computational efficiency of this approach with respect to traditional ones. Chapter 5 presents a cooperative framework to solve the allocation, scheduling and voltage/frequency selection problem to optimality for energy-efficient MPSoCs, while in Chapter 6 applications with conditional task graphs are taken into account. Finally, Chapter 7 proposes a complete framework, called Cellflow, to help programmers implement software efficiently on a real architecture, the Cell Broadband Engine processor. The second part is focused on energy-efficient software techniques for LCD displays. Chapter 8 gives an overview of portable device display technologies, illustrating the principal features of LCD video systems and the key challenges still open. Chapter 9 reviews several energy-efficient software techniques from the literature, while Chapter 10 illustrates in detail our method for saving significant power in an LCD panel. Finally, conclusions are drawn, reporting the main research contributions that have been discussed throughout this dissertation.
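A schematic of the backlight/image compensation arithmetic at the heart of the second part (a minimal sketch: the thesis performs the per-pixel scaling on the processor's image processing unit, and the percentile rule for choosing the backlight level is our illustrative assumption, not the thesis's policy):

```python
import numpy as np

def pick_backlight(frame, clip_budget=0.01):
    """Lowest backlight factor in (0, 1] such that at most `clip_budget`
    of the pixels would saturate after luminance compensation."""
    return float(np.clip(np.quantile(frame, 1.0 - clip_budget), 0.05, 1.0))

def compensate(frame, backlight):
    """Scale pixel luminance up to offset the dimmed backlight; values
    that would exceed full scale saturate, bounding the QoS loss."""
    return np.clip(frame / backlight, 0.0, 1.0)

frame = np.random.rand(240, 320)    # stand-in for a decoded frame in [0, 1]
b = pick_backlight(frame)           # backlight power scales roughly with b
shown = compensate(frame, b)        # perceived luminance ~ b * shown ~ frame
```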
Abstract:
The need for high bandwidth, due to the explosion of new multimedia-oriented IP-based services as well as increasing broadband access requirements, is leading to the need for flexible and highly reconfigurable optical networks. While transmission bandwidth does not represent a limit, thanks to the huge bandwidth provided by optical fibers and Dense Wavelength Division Multiplexing (DWDM) technology, the electronic switching nodes in the core of the network represent the bottleneck in terms of speed and capacity for the overall network. For this reason DWDM technology must be exploited not only for data transport but also for switching operations. In this Ph.D. thesis, solutions for photonic packet switches, a flexible alternative to circuit-switched optical networks, are proposed. In particular, solutions based on devices and components that are expected to mature in the near future are proposed, with the aim of limiting the employment of complex components. The work presented here is the result of part of the research activities performed by the Networks Research Group at the Department of Electronics, Computer Science and Systems (DEIS) of the University of Bologna, Italy. In particular, the work on optical packet switching has been carried out within three relevant research projects: the e-Photon/ONe and e-Photon/ONe+ projects, funded by the European Union in the Sixth Framework Programme, and the national project OSATE, funded by the Italian Ministry of Education, University and Scientific Research. The rest of the work is organized as follows. Chapter 1 gives a brief introduction to the network context and to contention resolution in photonic packet switches. Chapter 2 presents different strategies for contention resolution in the wavelength domain. Chapter 3 illustrates a possible implementation of one of the schemes proposed in Chapter 2. Chapter 4 then presents multi-fiber switches, which employ the wavelength and space domains jointly to solve contention. Chapter 5 shows buffered switches, which solve contention in the time domain besides the wavelength domain. Finally, Chapter 6 presents a cost model to compare different switch architectures in terms of cost.
Abstract:
Cost, performance and availability considerations are forcing even the most conservative high-integrity embedded real-time systems industry to migrate from simple hardware processors to ones equipped with caches and other acceleration features. This migration disrupts the practices and solutions that industry had developed and consolidated over the years to perform timing analysis. Industries that are confident in the efficiency and effectiveness of their verification and validation processes for old-generation processors do not have sufficient insight into the effects of the migration to cache-equipped processors. Caches are perceived as an additional source of complexity, with the potential to shatter the guarantees of cost- and schedule-constrained qualification of their systems. The current industrial approach to timing analysis is ill-equipped to cope with the variability incurred by caches. Conversely, the application of advanced WCET analysis techniques to real-world industrial software, developed without analysability in mind, is hardly feasible. We propose a development approach aimed at minimising cache jitters as well as at enabling the application of advanced WCET analysis techniques to industrial systems. Our approach builds on: (i) identification of those software constructs that may impede or complicate timing analysis in industrial-scale systems; (ii) elaboration of practical means, under the model-driven engineering (MDE) paradigm, to enforce the automated generation of software that is analysable by construction; (iii) implementation of a layout optimisation method to remove cache jitters stemming from the software layout in memory, with the intent of facilitating incremental software development, which is of high strategic interest to industry. The integration of those constituents in a structured approach to timing analysis achieves two interesting properties: the resulting software is analysable from the earliest releases onwards, as opposed to becoming so only when the system is final, and is more easily amenable to advanced timing analysis by construction, regardless of the system scale and complexity.
Abstract:
Theoretical models are developed for the continuous-wave and pulsed laser incision and cutting of thin single- and multi-layer films. A one-dimensional steady-state model establishes the theoretical foundations of the problem by combining a power-balance integral with heat flow in the direction of laser motion. In this approach, classical modelling methods for laser processing are extended by introducing multi-layer optical absorption and thermal properties, and the calculation domain is divided in correspondence with the progressive removal of individual layers. A second, time-domain numerical model for the short-pulse laser ablation of metals accounts for changes in optical and thermal properties during a single laser pulse. With sufficient fluence, the target surface is heated towards its critical temperature and homogeneous boiling or "phase explosion" takes place. Improvements are made over previous works through a more accurate calculation of optical absorption and of the shielding of the incident beam by the ablation products. A third, general time-domain numerical laser processing model combines ablation depth and energy absorption data from the short-pulse model with two-dimensional heat flow in an arbitrary multi-layer structure. Layer removal is the result of both progressive short-pulse ablation and classical vaporisation due to long-term heating of the sample. At low velocity, pulsed laser exposure of multi-layer films comprising aluminium-plastic and aluminium-paper is found to be characterised by short-pulse ablation of the metallic layer and vaporisation or degradation of the others due to thermal conduction from the former. At high velocity, all layers of the two films are ultimately removed by vaporisation or degradation as the average beam power is increased to achieve a complete cut. The transition velocity between the two characteristic removal types is shown to be a function of the pulse repetition rate. An experimental investigation validates the simulation results and provides new laser processing data for some typical packaging materials.
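As a point of reference, the simplest power balance of the kind the one-dimensional model builds on equates the absorbed beam power with the enthalpy carried away by the removed material (our simplified statement, ignoring conduction losses, not the thesis's full formulation):

```latex
P_{\mathrm{abs}} \;=\; \rho\, v\, w\, d \,\bigl[\, c_p\,(T_v - T_0) + L_m + L_v \,\bigr],
```

where \rho is the density, v the cutting speed, w the kerf width, d the layer thickness, c_p the specific heat, T_v and T_0 the vaporisation and ambient temperatures, and L_m, L_v the latent heats of melting and vaporisation; the multi-layer extension applies such a balance layer by layer with per-layer optical and thermal properties.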
Abstract:
Cytochrome c oxidase (CcO), complex IV of the respiratory chain, is one of the heme-copper oxidases and has an important function in cell metabolism. The enzyme contains four prosthetic groups and is located in the inner membrane of mitochondria and in the cell membrane of some aerobic bacteria. CcO catalyzes the electron transfer (ET) from cytochrome c to O2, with the actual reaction taking place at the binuclear center (CuB-heme a3). The reduction of O2 to two H2O consumes four protons. In addition, four protons are transported across the membrane, creating an electrochemical potential difference of these ions between the matrix and the intermembrane space. Despite their importance, membrane proteins such as CcO are still poorly studied, which is why the mechanism of the respiratory chain has not yet been fully elucidated. The aim of this work is to contribute to the understanding of the function of CcO. To this end, CcO from Rhodobacter sphaeroides was bound in a defined orientation to a functionalized metal electrode via a His-tag attached to the C-terminus of subunit II. The first electron acceptor, CuA, thereby lies closest to the metal surface. A lipid bilayer was then inserted in situ between the bound proteins, resulting in the so-called protein-tethered bilayer lipid membrane (ptBLM). For this, the optimal surface concentration of the bound proteins had to be determined. Electrochemical impedance spectroscopy (EIS), surface plasmon resonance spectroscopy (SPR) and cyclic voltammetry (CV) were applied to characterize the activity of CcO as a function of packing density. The main part of the work concerns the investigation of direct ET to CcO under anaerobic conditions. The combination of time-resolved surface-enhanced infrared absorption spectroscopy (tr-SEIRAS) and electrochemistry proved particularly suitable for this purpose. In a first study, ET was investigated by fast-scan CV, measuring CVs of non-activated as well as activated CcO at different scan rates. The activated form was obtained after catalytic turnover of the protein in the presence of O2. A four-ET model was developed to analyze the CVs. The method makes it possible to distinguish between a sequential and an independent ET mechanism for the four centers CuA, heme a, heme a3 and CuB, and to determine the standard redox potentials and the kinetic coefficients of the ET. In a second study, tr-SEIRAS was applied in step-scan mode. Square-wave potential pulses were applied to the CcO, and SEIRAS in ATR mode was used to record spectra at defined time slices. From these spectra, individual bands were isolated that show changes in vibrational modes of amino acids and peptide groups as a function of the redox state of the centers. Based on assignments from the literature, obtained by potentiometric titration of CcO, the bands could be tentatively assigned to the redox centers. The band areas plotted against time then reflect the redox kinetics of the centers and were again evaluated with the four-ET model. The results of both studies allow the conclusion that ET to CcO in a ptBLM most likely follows the sequential mechanism, corresponding to the natural ET from cytochrome c to CcO.
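A toy illustration of the distinction the four-ET model draws: in the sequential mechanism an electron injected at CuA must hop through heme a and heme a3 before reaching CuB, so the centers reduce in staggered succession, whereas in the independent mechanism each center would exchange electrons with the electrode directly. The rate constants below are arbitrary placeholders, not the fitted coefficients of the thesis:

```python
import numpy as np
from scipy.integrate import solve_ivp

k_in, k12, k23, k34 = 50.0, 20.0, 10.0, 5.0   # 1/s, placeholder rates

def sequential(t, x):
    """x = reduced fractions of [CuA, heme a, heme a3, CuB]; first-order
    hopping, each step requiring a reduced donor and an oxidized acceptor."""
    cua, ha, ha3, cub = x
    j1 = k12 * cua * (1.0 - ha)
    j2 = k23 * ha * (1.0 - ha3)
    j3 = k34 * ha3 * (1.0 - cub)
    return [k_in * (1.0 - cua) - j1, j1 - j2, j2 - j3, j3]

sol = solve_ivp(sequential, (0.0, 2.0), [0.0, 0.0, 0.0, 0.0],
                t_eval=np.linspace(0.0, 2.0, 200))
# sol.y traces the staggered reduction transients that the band areas from
# step-scan tr-SEIRAS would follow if the mechanism is sequential.
```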
Abstract:
In this work, a new dynamical core is developed and integrated into the existing numerical weather prediction system COSMO. Discontinuous Galerkin (DG) methods are used for the spatial discretization, and Runge-Kutta methods for the temporal discretization. This makes a high-order method easy to realize and provides local conservation properties for the prognostic variables. The dynamical core developed here uses terrain-following coordinates in conservation form for the modelling of orography and couples the DG method with a Kessler scheme for warm-rain precipitation. The fall velocity of rain is discretized not implicitly in the Kessler scheme, as is customary, but explicitly in the dynamical core. As a result, the time steps of the parameterization for the phase changes of water and of the dynamics are fully decoupled, so that very large time steps can be used for the parameterization. The coupling is realized both for operator splitting and for process splitting. The convergence and the global conservation properties of the newly developed dynamical core are validated using idealized test cases. Mass is conserved globally up to machine precision. The orography modelling is validated by means of flow over mountains. The combination of DG methods and terrain-following coordinates used here makes it possible to handle steeper mountains than is possible with the finite-difference-based dynamical core of COSMO. It is shown when the full tensor-product basis and when the minimal basis is advantageous. The magnitude of the influence of the order of the scheme, the parameterization time step and the splitting strategy on the simulation result is investigated. Finally, it is shown that, for the same time step, the DG methods are competitive with finite difference methods in terms of run time due to their better scalability.
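For orientation, a DG discretization of a conservation law \partial_t q + \nabla\cdot f(q) = 0 seeks, on every grid element K, a polynomial approximation q_h satisfying (a generic statement of the method, not the specific COSMO-DG formulation):

```latex
\frac{d}{dt}\int_K q_h\,\varphi\,dx
 \;=\; \int_K f(q_h)\cdot\nabla\varphi\,dx
 \;-\; \oint_{\partial K}\hat{f}\bigl(q_h^-,q_h^+\bigr)\cdot n\,\varphi\,ds
 \qquad \text{for all test polynomials } \varphi ,
```

where \hat{f} is a numerical flux coupling neighbouring elements. Taking \varphi = 1 shows that each element's mean changes only through the fluxes across its boundary, which is exactly the local (and hence global, up to machine precision) conservation property exploited above.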
Abstract:
Numerical simulation of Oldroyd-B type viscoelastic fluids is a very challenging problem. The well-known "High Weissenberg Number Problem" has haunted mathematicians, computer scientists, and engineers for more than 40 years. When the Weissenberg number, which represents the ratio of elasticity to viscosity, exceeds some limit, simulations performed with standard methods break down exponentially fast in time. However, some approaches, such as the logarithm transformation technique, can significantly raise the Weissenberg number limit up to which the simulations stay stable. We should point out that the global existence of weak solutions for the Oldroyd-B model is still open. Note that in the evolution equation of the elastic stress tensor the terms describing diffusive effects are typically neglected in the modelling due to their smallness; however, when these diffusive terms are kept in the constitutive law, the global existence of weak solutions can be shown in two space dimensions. The main part of the thesis is devoted to the stability study of the Oldroyd-B viscoelastic model. Firstly, we show that the free energy of the diffusive Oldroyd-B model, as well as that of its logarithm transformation, is dissipative in time. Further, we have developed free-energy-dissipative schemes based on the characteristic finite element and finite difference frameworks. In addition, the global linear stability analysis of the diffusive Oldroyd-B model is also discussed. The next part of the thesis deals with the error estimates of the combined finite element and finite volume discretization of a special Oldroyd-B model which covers the limiting case of the Weissenberg number going to infinity. The theoretical results are confirmed by a series of numerical experiments, which are also presented in the thesis.
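For concreteness, the constitutive law in question, written for the conformation tensor \sigma with the diffusive term retained, reads (notation is ours; the thesis may use a different scaling):

```latex
\partial_t\sigma + (u\cdot\nabla)\sigma - (\nabla u)\,\sigma - \sigma\,(\nabla u)^{T}
 \;=\; -\frac{1}{\mathrm{Wi}}\,(\sigma - I) \;+\; \varepsilon\,\Delta\sigma ,
```

where Wi is the Weissenberg number and \varepsilon > 0 the physically small stress-diffusion coefficient; setting \varepsilon = 0 recovers the standard Oldroyd-B law, for which the global existence of weak solutions remains open.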
Abstract:
Time domain analysis of electroencephalography (EEG) can identify subsecond periods of quasi-stable brain states. These so-called microstates are assumed to correspond to basic units of cognition and emotion. Global Field Synchronization (GFS), on the other hand, is a frequency domain measure estimating the functional synchronization of brain processes on a global level for each EEG frequency band [Koenig, T., Lehmann, D., Saito, N., Kuginuki, T., Kinoshita, T., Koukkou, M., 2001. Decreased functional connectivity of EEG theta-frequency activity in first-episode, neuroleptic-naive patients with schizophrenia: preliminary results. Schizophr. Res. 50, 55-60.]. Using these time and frequency domain analyses, several previous studies reported shortened microstate durations for specific microstate classes and decreased theta-band GFS in drug-naive schizophrenia compared to controls. The purpose of this study was to investigate changes in these EEG parameters after drug treatment in drug-naive schizophrenia. EEG analysis was performed on 21 drug-naive patients and 21 healthy controls. Fourteen patients were reevaluated 2-8 weeks (mean 4.3) after the initiation of drug administration. The results extend findings on treatment effects on brain function in schizophrenia, and imply that shortened duration of specific microstate classes appears to be a state marker, especially in patients who later respond to neuroleptics, while lower theta-band GFS seems to be a state-related phenomenon and higher gamma-band GFS a trait-like phenomenon.
Abstract:
Biogeochemical processes in coastal regions, including the coastal areas of the Great Lakes, are of great importance due to complex physical, chemical and biological characteristics that differ from those of either the adjoining land or the open water. Particle-reactive radioisotopes, both naturally occurring (210Pb, 210Po and 7Be) and man-made (137Cs), have proven to be useful tracers for these processes in many systems. However, a systematic isotope study of the northwest coast of the Keweenaw Peninsula in Lake Superior had not yet been performed. In this dissertation research, field sampling, laboratory measurements and numerical modeling were conducted to understand the biogeochemistry of the radioisotope tracers and some particle-related coastal processes. In the first part of the dissertation, radioisotope activities of 210Po and 210Pb were measured in a variety of samples (dissolved, suspended particles, sediment trap materials, surficial sediment), and a complete picture of the distribution and disequilibrium of this pair of isotopes was drawn. The application of a simple box model utilizing these field observations reveals short isotope residence times in the water column and a significant contribution of sediment resuspension (for both particles and isotopes). The results imply a highly dynamic coastal region. In the second part of the dissertation, this conclusion is examined further. Based on intensive sediment coring, the spatial distribution of isotope inventories (mainly 210Pb, 137Cs and 7Be) in the nearshore region was determined. Isotope-based focusing factors categorized most of the sampling sites as non-depositional or temporarily depositional zones. A two-dimensional steady-state box-in-series model was developed and applied to individual transects, with the 210Pb inventories as model input. The modeling framework included both the water column and the upper sediments down to the depth of unsupported 210Pb penetration. The model was used to predict isotope residence times and cross-margin fluxes of sediments and isotopes at different locations along each transect. The time scale for sediment focusing from the nearshore to offshore regions of a transect was on the order of 10 years. The possibility of longshore sediment movement was indicated by high 137Cs:210Pb inventory ratios. Local deposition of fine particles, including fresh organic carbon, may explain the observed distribution of benthic organisms such as Diporeia. In the last part of the dissertation, the isotope tracers 210Pb and 210Po were coupled into a hydrodynamic model of Lake Superior. The model was adapted from an existing 2-D finite difference physical-biological model which had previously been applied successfully to Lake Superior. Using the field results from the first part of the dissertation as initial conditions, the model was used to predict the isotope distribution in the water column, and reasonable results were achieved. The modeling experiments demonstrated the potential of using a hydrodynamic model to study radioisotope biogeochemistry in the lake, although further refinements are necessary.
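A back-of-the-envelope version of the single-box balance used in the first part (all numbers are placeholders; the dissertation's inputs come from the field measurements):

```python
# Steady-state one-box balance for a particle-reactive isotope in the
# water column: inputs (atmospheric + resuspension) = decay + removal.
lam_pb210 = 0.0311      # 210Pb decay constant, 1/yr
atm_flux = 0.5          # hypothetical atmospheric input, dpm/cm^2/yr
resusp_flux = 1.5       # hypothetical resuspension input, dpm/cm^2/yr
inventory = 0.04        # hypothetical water-column inventory, dpm/cm^2

removal = atm_flux + resusp_flux - lam_pb210 * inventory   # scavenging flux
tau_years = inventory / removal                            # residence time
print(f"residence time ~ {tau_years * 365.0:.1f} days; resuspension supplies "
      f"{resusp_flux / (atm_flux + resusp_flux):.0%} of the isotope input")
```

With inputs of this order the residence time comes out at days to weeks, the kind of short time scale that signals a highly dynamic coastal water column.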
Abstract:
This paper reports the studies carried out to develop and calibrate optimal models for the objectives of this work. In particular, a quarter-bogie model for the vehicle, rail-wheel contact modelled with the Lagrangian multiplier method, and a 2D spatial discretization were selected as the optimal choices. Furthermore, a 3D coupled vehicle-track model was also developed to check the results obtained with the 2D model. The calculations were carried out in the time domain, and envelopes of the relevant results were obtained for several track profiles and speed ranges. Distributed elevation irregularities were generated based on power spectral density (PSD) distributions. The results obtained include the wheel-rail contact forces and the forces transmitted to the bogie by the primary suspension. The latter loads are relevant for the purpose of evaluating the performance of the infrastructure.
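A compact sketch of generating such irregularities from a PSD by the spectral-representation method (the power-law PSD and its parameters are generic placeholders, not the track spectra used in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def irregularity_profile(x, psd, n_waves=500, k_min=0.05, k_max=10.0):
    """Superpose cosines whose amplitudes follow the one-sided PSD S(k)
    [m^2/(rad/m)], each with an independent uniform random phase."""
    k = np.linspace(k_min, k_max, n_waves)         # spatial frequency, rad/m
    dk = k[1] - k[0]
    amp = np.sqrt(2.0 * psd(k) * dk)               # spectral representation
    phi = rng.uniform(0.0, 2.0 * np.pi, n_waves)
    return (amp[:, None] * np.cos(np.outer(k, x) + phi[:, None])).sum(axis=0)

x = np.linspace(0.0, 500.0, 5001)                  # 500 m of track, 0.1 m step
z = irregularity_profile(x, lambda k: 1e-7 / k**3) # elevation irregularity, m
```

Feeding such profiles to the vehicle-track model and sweeping the speed range yields the envelopes of contact and suspension forces reported above.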
Abstract:
This work explores the automatic recognition of physical activity intensity patterns from multi-axial accelerometry and heart rate signals. Data collection was carried out in free-living conditions and in three controlled gymnasium circuits, for a total of 179.80 h of data divided into sedentary situations (65.5%), light-to-moderate activity (17.6%) and vigorous exercise (16.9%). The proposed machine learning pipeline comprises the following steps: time-domain feature definition, standardization and PCA projection, unsupervised clustering (by k-means and GMM) and an HMM to account for long-term temporal trends. Performance was evaluated by 30 runs of a 10-fold cross-validation. Both the k-means and the GMM-based approaches yielded high overall accuracy (86.97% and 85.03%, respectively) and, given the imbalance of the dataset, meritorious F-measures (up to 77.88%) for non-sedentary cases. Classification errors tended to be concentrated around transients, which limits their practical impact. Hence, we consider our proposal suitable for 24 h monitoring of physical activity in ambulatory scenarios and a first step towards intensity-specific energy expenditure estimators.
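A condensed sketch of the unsupervised part of the pipeline with scikit-learn (window length, feature set and component counts are illustrative, and the final HMM smoothing stage is omitted):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def window_features(acc_mag, hr, fs=50, win_s=10):
    """Per-window time-domain features: mean, deviation and mean absolute
    jerk of the acceleration magnitude, plus mean heart rate."""
    n = fs * win_s
    rows = []
    for i in range(0, len(acc_mag) - n, n):
        a, h = acc_mag[i:i + n], hr[i:i + n]
        rows.append([a.mean(), a.std(), np.abs(np.diff(a)).mean(), h.mean()])
    return np.array(rows)

# Synthetic stand-ins for one hour of 50 Hz accelerometry and heart rate.
acc_mag = np.abs(np.random.randn(50 * 3600))
hr = 70.0 + 10.0 * np.random.rand(50 * 3600)

X = window_features(acc_mag, hr)
model = make_pipeline(StandardScaler(), PCA(n_components=3),
                      KMeans(n_clusters=3, n_init=10, random_state=0))
labels = model.fit_predict(X)    # 3 clusters ~ sedentary / light / vigorous
```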
Abstract:
The boundary element method (BEM) has been applied successfully to many engineering problems during the last decades. Compared with domain-type methods like the finite element method (FEM) or the finite difference method (FDM), the BEM can handle problems where the medium extends to infinity much more easily, as there is no need to develop special boundary conditions (quiet or absorbing boundaries) or infinite elements to limit the domain studied. The determination of the dynamic stiffness of arbitrarily shaped footings is just one of the fields where the BEM has been the method of choice, especially in the 1980s. With the continuous development of computer technology and the available hardware, the size of the problems under study grew and, as the flop count for solving the resulting linear system of equations grows with the third power of the number of equations, there was a need for iterative methods with better performance. In [1] the GMRES algorithm was presented, which is now widely used in implementations of the collocation BEM. While the FEM results in sparsely populated coefficient matrices, the BEM leads, in general, to fully or densely populated ones, depending on the number of subregions, posing a serious memory problem even for today's computers. If the geometry of the problem permits the surface of the domain to be meshed with equally shaped elements, many of the resulting coefficients will be calculated and stored repeatedly. The present paper shows how these unnecessary operations can be avoided, reducing the calculation time as well as the storage requirements. To this end a similar coefficient identification algorithm (SCIA) has been developed and implemented in a program written in Fortran 90. The vertical dynamic stiffness of a single pile in layered soil has been chosen to test the performance of the implementation. The results obtained with the 3D model may be compared with those obtained with an axisymmetric formulation, which are considered the reference values, as the mesh quality is much better. The entire 3D model comprises more than 35000 dofs, the biggest single region being a soil region with 21168 dofs. Note that the memory necessary to store all coefficients of this single region is about 6.8 GB, an amount which is usually not available in personal computers. In the problem under study, the interface zone between the two adjacent soil regions as well as the surface of the top layer may be meshed with equally sized elements. In this case the application of the SCIA leads to an important reduction in memory requirements: the maximum memory used during the calculation was reduced to 1.2 GB. The application of the SCIA thus permits problems to be solved on personal computers which would otherwise require much more powerful hardware.
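The idea behind the SCIA can be conveyed in a few lines: when two equally shaped, identically oriented elements occur in the same relative position as an already processed pair, their coefficient block is identical, so it is computed once and reused. A simplified illustration (the quantization of the relative position into a dictionary key is our own shorthand, not the Fortran 90 implementation):

```python
import numpy as np

cache = {}

def pair_key(src_center, fld_center, tol=1e-9):
    """Canonical key for the relative position of two equally shaped,
    equally oriented elements; equal keys imply equal coefficient blocks."""
    rel = np.asarray(fld_center, float) - np.asarray(src_center, float)
    return tuple(np.round(rel / tol).astype(np.int64))

def influence_block(src_center, fld_center, integrate):
    """Return the coefficient block for one element pair, performing the
    expensive numerical integration only for unseen relative geometries."""
    key = pair_key(src_center, fld_center)
    if key not in cache:
        cache[key] = integrate(src_center, fld_center)
    return cache[key]
```

Storing one block per distinct relative geometry instead of one per element pair is what reduced the reported memory from an infeasible 6.8 GB for a single region to a 1.2 GB overall maximum.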
Abstract:
The analysis of deformation in soils is of paramount importance in geotechnical engineering. For a long time the complex behaviour of natural deposits defied the ingenuity of engineers. The time has come when, with the aid of computers, numerical methods will allow the solution of every problem if the material law can be specified with a certain accuracy. Boundary element (B.E.) techniques have recently exploded in a splendid flowering of methods and applications that compare advantageously with other well-established procedures like the finite element method (F.E.). Their application to soil mechanics problems (Brebbia 1981) has started and will grow in the future. This paper tries to present a simple formulation of a classical problem. In fact, there is already a large number of applications of B.E. to diffusion problems (Rizzo et al., Shaw, Chang et al., Combescure et al., Wrobel et al., Roures et al., Onishi et al.), and very recently the first specific application to consolidation problems has been published by Onishi et al. Here we develop an alternative formulation to that presented in the last reference. Fundamentally, the idea is to introduce a finite difference discretization in the time domain in order to use the fundamental solution of a Helmholtz-type equation governing the neutral pressure distribution. Although this procedure seems to have gone unappreciated in the previous technical literature, it is nevertheless effective and straightforward to implement. Indeed, for the special problem under study it is perfectly suited, because a step-by-step interaction between the elastic and flow problems is needed. It also allows the introduction of non-linear elastic properties and time-dependent conditions very easily, as will be shown, and compares well with the performance of other approaches.
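The key step in compact form: a backward finite difference in time applied to the diffusion equation c\,\nabla^2 p = \partial p/\partial t governing the neutral (pore) pressure gives, at each time step (generic notation; c is the consolidation coefficient),

```latex
\nabla^{2} p_{n+1} \;-\; \frac{1}{c\,\Delta t}\, p_{n+1} \;=\; -\,\frac{1}{c\,\Delta t}\, p_{n},
```

a modified Helmholtz equation whose known fundamental solution can serve as the kernel of the boundary integral formulation, with the previous step's pressure entering as a known source term; this is what makes the step-by-step coupling with the elastic problem natural.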