16 results for Tibetan coded character set extension A
in Repositório Científico do Instituto Politécnico de Lisboa - Portugal
Abstract:
The crustal and lithospheric mantle structure at the south segment of the west Iberian margin was investigated along a 370 km long seismic transect. The transect goes from unthinned continental crust onshore to oceanic crust, crossing the ocean-continent transition (OCT) zone. The wide-angle data set includes recordings from 6 OBSs and 2 inland seismic stations. Kinematic and dynamic modeling provided a 2D velocity model that proved to be consistent with the modeled free-air anomaly data. The interpretation of coincident multi-channel near-vertical and wide-angle reflection data sets allowed the identification of four main crustal domains: (i) continental (east of 9.4°W); (ii) continental thinning (9.4°W-9.7°W); (iii) transitional (9.7°W-~10.5°W); and (iv) oceanic (west of ~10.5°W). In the continental domain the complete crustal section of slightly thinned continental crust is present. The upper (UCC, 5.1-6.0 km/s) and the lower continental crust (LCC, 6.9-7.2 km/s) are seismically reflective and have intermediate to low P-wave velocity gradients. The middle continental crust (MCC, 6.35-6.45 km/s) is generally unreflective, with a low velocity gradient. The main thinning of the continental crust occurs in the thinning domain by attenuation of the UCC and the LCC. Major thinning of the MCC starts to the west of the LCC pinchout point, where the MCC rests directly upon the mantle. In the thinning domain the Moho slope is at least 13° and the continental crust thickness decreases seaward from 22 to 11 km over a ~35 km distance, stretched by a factor of 1.5 to 3. In the oceanic domain a two-layer high-gradient igneous crust (5.3-6.0 km/s; 6.5-7.4 km/s) was modeled. The intra-crustal interface correlates with prominent mid-basement, 10-15 km long reflections in the multi-channel seismic profile. Strong secondary reflected PmP phases require a first-order discontinuity at the Moho.
The sedimentary cover can be as thick as 5 km and the igneous crustal thickness varies from 4 to 11 km in the west, where the profile reaches the Madeira-Tore Rise. In the transitional domain the crust has a complex structure that varies both horizontally and vertically. Beneath the continental slope it includes exhumed continental crust (6.15-6.45 km/s). Strong diffractions were modeled to originate at the lower interface of this layer. The western segment of this transitional domain is highly reflective at all levels, probably due to dykes and sills, according to the high apparent susceptibility and density modeled at this location. The sub-Moho mantle velocity is found to be 8.0 km/s, but velocities lower than 8.0 km/s confined to short segments are not excluded by the data. Strong P-wave wide-angle reflections are modeled to originate at a depth of 20 km within the lithospheric mantle, under the eastern segment of the oceanic domain, or even deeper in the transitional domain, suggesting a layered structure for the lithospheric mantle. Both the interface depths and the velocities of the continental section are in good agreement with the conjugate Newfoundland margin. A ~40 km wide OCT having a geophysical signature distinct from that of the OCT to the north favors a two-pulse continental breakup.
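The thinning figures above allow a simple consistency check via the crustal stretching factor; a minimal sketch, assuming the standard definition beta = initial thickness / thinned thickness (the abstract itself does not state which definition it uses):

```python
def stretching_factor(initial_thickness_km, thinned_thickness_km):
    """Crustal stretching factor (beta): ratio of the pre-thinning crustal
    thickness to the present-day thinned thickness."""
    return initial_thickness_km / thinned_thickness_km

# The profile's thinning domain: crust thins seaward from 22 km to 11 km.
beta = stretching_factor(22.0, 11.0)  # → 2.0, inside the reported 1.5-3 range
```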
Abstract:
In practical applications of optimization it is common to have several conflicting objective functions to optimize. Frequently, these functions are subject to noise or can be of black-box type, preventing the use of derivative-based techniques. We propose a novel multiobjective derivative-free methodology, calling it direct multisearch (DMS), which does not aggregate any of the objective functions. Our framework is inspired by the search/poll paradigm of direct-search methods of directional type and uses the concept of Pareto dominance to maintain a list of nondominated points (from which the new iterates or poll centers are chosen). The aim of our method is to generate as many points in the Pareto front as possible from the polling procedure itself, while keeping the whole framework general enough to accommodate other disseminating strategies, in particular when using the (optional) search step. DMS generalizes to multiobjective optimization (MOO) all direct-search methods of directional type. We prove, under the common assumptions used in direct search for single-objective optimization, that at least one limit point of the sequence of iterates generated by DMS lies in (a stationary form of) the Pareto front. Moreover, extensive computational experience has shown that our methodology has an impressive capability of generating the whole Pareto front, even without using a search step. Two by-products of this paper are (i) the development of a collection of test problems for MOO and (ii) the extension of performance and data profiles to MOO, allowing a comparison of several solvers on a large set of test problems in terms of their efficiency and robustness in determining Pareto fronts.
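The bookkeeping at the heart of DMS, keeping only nondominated points, can be sketched with a generic Pareto filter (a minimal illustration assuming minimization of all objectives, not the authors' implementation):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization): a is no
    worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(points):
    """Keep only the Pareto-nondominated objective vectors from a list."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

pts = [(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0)]
front = nondominated(pts)
```

Here (3.0, 3.0) is removed because (2.0, 2.0) is at least as good in both objectives and strictly better in one; the survivors form the current list of poll centers.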
Abstract:
In this work a new probabilistic and dynamical approach to an extension of the Gompertz law is proposed. A generalized family of probability density functions, designated by Beta*(p, q), which is proportional to the right-hand side of the Tsoularis-Wallace model, is studied. In particular, for p = 2, the investigation is extended to the extreme value models of Weibull and Fréchet type. These models, described by differential equations, are proportional to the hyper-Gompertz growth model. It is proved that the Beta*(2, q) densities are a power of a mixture of beta densities, and that their dynamics are determined by a non-linear coupling of probabilities. The dynamical analysis is performed using techniques of symbolic dynamics, and the system complexity is measured using topological entropy. Generally, the natural history of a malignant tumour is reflected through bifurcation diagrams, in which regions of regression, stability, bifurcation, chaos and terminus are identified.
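For orientation, the growth laws the abstract refers to can be written out in their standard forms (the exact Beta*(p, q) normalization is not given in the abstract and is not reproduced here):

```latex
% Gompertz law: growth rate proportional to N times the log-distance
% to the carrying capacity K
\frac{dN}{dt} = r\,N\,\ln\!\left(\frac{K}{N}\right)

% Hyper-Gompertz (Tsoularis-Wallace) generalization, with exponent gamma > 0
\frac{dN}{dt} = r\,N\left[\ln\!\left(\frac{K}{N}\right)\right]^{\gamma}
```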
Abstract:
Master's final project for obtaining the degree of Master in Mechanical Engineering - Maintenance and Production branch
Abstract:
We use a two-dimensional (2D) elastic free energy to calculate the effective interaction between two circular disks immersed in smectic-C films. For strong homeotropic anchoring, the distortion of the director field caused by the disks generates topological defects that induce an effective interaction between the disks. We use finite elements, with adaptive meshing, to minimize the 2D elastic free energy. The method is shown to be accurate and efficient for inhomogeneities on the length scales set by the disks and the defects, which differ by up to 3 orders of magnitude. We compute the effective interaction between two disk-defect pairs in a simple (linear) configuration. For large disk separations, D, the elastic free energy scales as ~D^-2, confirming the dipolar character of the long-range effective interaction. For small D the energy exhibits a pronounced minimum. The lowest energy corresponds to a symmetrical configuration of the disk-defect pairs, with the inner defect at the mid-point between the disks. The disks are separated by a distance that is twice the distance of the outer defect from the nearest disk. The latter is identical to the equilibrium distance of a defect nucleated by an isolated disk.
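The quoted ~D^-2 scaling is the kind of claim one verifies by measuring the log-log slope of energy versus separation; a sketch with synthetic, hypothetical far-field energies (not the paper's data):

```python
import math

# Hypothetical far-field energies obeying the dipolar scaling E(D) = A * D**-2.
A = 5.0
D = [10.0, 20.0, 40.0, 80.0]
E = [A * d ** -2 for d in D]

# The log-log slope between successive separations recovers the exponent.
slopes = [(math.log(E[i + 1]) - math.log(E[i])) / (math.log(D[i + 1]) - math.log(D[i]))
          for i in range(len(D) - 1)]
exponent = sum(slopes) / len(slopes)  # ≈ -2 for a dipolar interaction
```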
Abstract:
Electrocardiographic (ECG) signals are emerging as a recent trend in the field of biometrics. In this paper, we propose a novel ECG biometric system that combines clustering and classification methodologies. Our approach is based on dominant-set clustering and provides a framework for outlier removal and template selection. It enhances the typical workflows by making them better suited to new ECG acquisition paradigms that use fingers or hand palms, which lead to signals with a lower signal-to-noise ratio that are more prone to noise artifacts. Preliminary results show the potential of the approach, helping to further validate these highly usable setups and ECG signals as a complementary biometric modality.
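The paper's outlier removal and template selection rely on dominant-set clustering; as a much simpler stand-in, the sketch below ranks candidate heartbeats by distance to the ensemble mean, drops the farthest outliers, and averages the rest (a heuristic illustration only, not the proposed method):

```python
def euclid(a, b):
    """Euclidean distance between two equally sampled heartbeat vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def select_template(beats, keep_ratio=0.8):
    """Toy template selection: rank beats by distance to the ensemble mean,
    keep the closest keep_ratio fraction, and average them into a template."""
    n = len(beats[0])
    mean = [sum(b[i] for b in beats) / len(beats) for i in range(n)]
    ranked = sorted(beats, key=lambda b: euclid(b, mean))
    kept = ranked[:max(1, int(keep_ratio * len(beats)))]
    return [sum(b[i] for b in kept) / len(kept) for i in range(n)]

# Three consistent beats plus one artifact-corrupted outlier (toy 2-sample beats).
beats = [[1.0, 1.0], [1.1, 0.9], [0.9, 1.1], [5.0, 5.0]]
template = select_template(beats, keep_ratio=0.75)  # the outlier is dropped
```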
Abstract:
Dissertation presented to the Escola Superior de Comunicação Social in partial fulfilment of the requirements for the Master's degree in Journalism.
Abstract:
Dissertation presented to the Escola Superior de Educação de Lisboa for obtaining the Master's degree in Special Education, in the field of Cognition and Multiple Disabilities
Abstract:
It is important to understand and forecast the daily consumption of a typical or of a particular household in order to design and size suitable renewable energy systems and energy storage. In this research on Short Term Load Forecasting (STLF), Artificial Neural Networks (ANN) were used and, despite the unpredictability of consumption, it was shown that the electricity consumption of a household can be forecast with good accuracy. ANNs are recognized as a potential methodology for modeling hourly and daily energy consumption and load forecasting. Input variables such as apartment area, number of occupants, electrical appliance consumption, and Boolean inputs such as the hourly meter system were considered. Furthermore, the investigation carried out aims to define an ANN architecture and a training algorithm in order to achieve a robust model for forecasting energy consumption in a typical household. It was observed that a feed-forward ANN trained with the Levenberg-Marquardt algorithm provided good performance. This research used a database of consumption records logged in 93 real households in Lisbon, Portugal, between February 2000 and July 2001, including both weekdays and weekends. The results show that the ANN approach provides a reliable model for forecasting household electric energy consumption and load profiles. © 2014 The Author.
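A minimal sketch of how such input vectors might be assembled (the exact encoding is our assumption, since the abstract only names the variables; a 24-element Boolean one-hot for the hour of day stands in for the "hourly meter system" inputs):

```python
def encode_sample(hour, apartment_area_m2, occupants, appliance_kw):
    """Build one ANN input vector: a 24-element Boolean one-hot for the hour
    of day, followed by the numeric household features."""
    hour_onehot = [1.0 if h == hour else 0.0 for h in range(24)]
    return hour_onehot + [float(apartment_area_m2), float(occupants), float(appliance_kw)]

# Hypothetical household: 85 m2, 3 occupants, 2.4 kW of appliances, at 18:00.
x = encode_sample(hour=18, apartment_area_m2=85, occupants=3, appliance_kw=2.4)
```

Vectors like x would then be fed, one per hourly record, to the feed-forward network together with the measured consumption as the training target.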
Abstract:
We investigate the structural and thermodynamic properties of a model of particles with 2 patches of type A and 10 patches of type B. Particles are placed on the sites of a face-centered cubic lattice with the patches oriented along the nearest-neighbor directions. The competition between the self-assembly of chains, rings, and networks on the phase diagram is investigated by carrying out a systematic investigation of this class of models, using an extension of Wertheim's theory for associating fluids and Monte Carlo numerical simulations. We varied the ratio r = epsilon(AB)/epsilon(AA) of the interaction between patches A and B, epsilon(AB), and between A patches, epsilon(AA) (epsilon(BB) is set to zero), as well as the relative position of the A patches, i.e., the angle theta between the (lattice) directions of the A patches. We found that both r and theta (60 degrees, 90 degrees, or 120 degrees) have a profound effect on the phase diagram. In the empty-fluid regime (r < 1/2) the phase diagram is reentrant, with a closed miscibility loop. The region around the lower critical point exhibits unusual structural and thermodynamic behavior determined by the presence of relatively short rings. The agreement between the results of theory and simulation is excellent for theta = 120 degrees but deteriorates as theta decreases, revealing the need for new theoretical approaches to describe the structure and thermodynamics of systems dominated by small rings. (C) 2014 AIP Publishing LLC.
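The Monte Carlo simulations mentioned above are typically driven by the standard Metropolis acceptance rule; a minimal, generic sketch (with k_B = 1; not the authors' code):

```python
import math
import random

def metropolis_accept(delta_e, temperature, rng=random.random):
    """Standard Metropolis criterion: always accept a move that lowers the
    energy; otherwise accept with probability exp(-delta_e / T) (k_B = 1)."""
    if delta_e <= 0.0:
        return True
    return rng() < math.exp(-delta_e / temperature)
```

Downhill moves are always taken, while uphill moves survive with Boltzmann probability, which is what lets a lattice model of patchy particles sample chains, rings, and networks at finite temperature.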
Abstract:
Energy efficiency plays an important role in reducing CO2 emissions, combating climate change and improving the competitiveness of the economy. The problem presented here is related to the use of stand-alone diesel gen-sets and their high specific fuel consumption when operating at low loads. The variable-speed gen-set concept is explained as an energy-saving solution to improve the efficiency of this system. This paper details how an optimum fuel-consumption trajectory is obtained, based on an experimentally determined diesel engine power map.
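The optimum-trajectory idea can be sketched as a scan over a fuel-consumption map: for each demanded power, choose the engine speed with the lowest specific fuel consumption. The map below is entirely hypothetical (the paper's is measured experimentally):

```python
def optimum_speed_trajectory(power_demands_kw, speeds_rpm, sfc):
    """For each demanded power, scan the candidate engine speeds and keep
    the one with the lowest specific fuel consumption (g/kWh)."""
    trajectory = []
    for p in power_demands_kw:
        best = min(speeds_rpm, key=lambda n: sfc(n, p))
        trajectory.append((p, best))
    return trajectory

# Hypothetical bowl-shaped SFC map whose optimum speed grows with load.
def sfc(n_rpm, p_kw):
    n_opt = 1200.0 + 40.0 * p_kw
    return 200.0 + 0.0005 * (n_rpm - n_opt) ** 2

traj = optimum_speed_trajectory([5.0, 10.0], range(1000, 3001, 100), sfc)
```

With a variable-speed gen-set, the controller can follow such a trajectory instead of being pinned to synchronous speed, which is where the low-load savings come from.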
Abstract:
In this paper a new method for self-localization of mobile robots, based on a PCA positioning sensor able to operate in unstructured environments, is proposed and experimentally validated. The proposed PCA extension is able to perform the eigenvector computation from a set of signals corrupted by missing data. The sensor package considered in this work contains a 2D depth sensor pointed upwards to the ceiling, providing depth images with missing data. The positioning sensor obtained is then integrated into a Linear Parameter Varying mobile robot model to obtain a self-localization system, based on linear Kalman filters, with globally stable position error estimates. A study consisting of adding synthetic random corrupted data to the captured depth images revealed that this extended PCA technique is able to reconstruct the signals with improved accuracy. The self-localization system obtained is assessed in unstructured environments and the methodologies are validated even in the case of varying illumination conditions.
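The paper's PCA extension computes eigenvectors from signals with missing data; as the simplest possible stand-in, the sketch below imputes missing entries with column means and extracts the leading eigenvector by power iteration (an illustrative assumption, not the proposed technique):

```python
def leading_eigenvector(data, missing=None, iters=200):
    """Toy missing-data PCA: impute each missing entry with its column mean,
    then extract the leading covariance eigenvector by power iteration."""
    n, d = len(data), len(data[0])
    # Column means over the observed entries only.
    means = []
    for j in range(d):
        obs = [row[j] for row in data if row[j] is not missing]
        means.append(sum(obs) / len(obs))
    filled = [[row[j] if row[j] is not missing else means[j] for j in range(d)]
              for row in data]
    centered = [[filled[i][j] - means[j] for j in range(d)] for i in range(n)]
    # Covariance matrix (d x d).
    cov = [[sum(centered[i][a] * centered[i][b] for i in range(n)) / n
            for b in range(d)] for a in range(d)]
    # Power iteration for the dominant eigenvector.
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Samples along the direction (2, 1), with one missing entry marked as None.
data = [[1.0, 1.0], [2.0, 2.0], [3.0, 3.0], [4.0, None]]
v = leading_eigenvector(data)
```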
Abstract:
Project work submitted to the Escola Superior de Teatro e Cinema in fulfilment of the requirements for the Master's degree in Theatre - specialization in Stage Design.
Abstract:
Materials selection is a matter of great importance to engineering design, and software tools are valuable to inform decisions in the early stages of product development. However, when a set of alternative materials is available for the different parts a product is made of, the question of which optimal material mix to choose for a group of parts is not trivial, and the engineer/designer typically goes about it in a part-by-part procedure. Optimizing each part per se can lead to a globally sub-optimal solution from the product point of view. An optimization procedure that can deal with products with multiple parts, each with discrete design variables, and determine the optimal solution under different objectives is therefore needed. To solve this multiobjective optimization problem, a new routine based on the Direct MultiSearch (DMS) algorithm is created. Results from the Pareto front can help the designer align the materials selection for a complete set of parts with the product attribute objectives, depending on the relative importance of each objective.
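For a handful of parts, the discrete material-mix space can even be enumerated exhaustively and filtered by Pareto dominance. The toy illustration below uses hypothetical parts, materials and (mass, cost) objectives, both minimized; the paper uses DMS precisely because realistic instances outgrow enumeration:

```python
from itertools import product

# Hypothetical two-part product; each candidate material contributes
# (mass_kg, cost_eur) to the product totals.
options = {
    "housing": {"steel": (2.0, 1.0), "aluminium": (1.0, 5.0)},
    "bracket": {"steel": (1.0, 0.5), "polymer": (0.2, 6.0)},
}

def dominates(a, b):
    """Pareto dominance for minimization of all objectives."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

parts = sorted(options)  # ["bracket", "housing"]
mixes = []
for choice in product(*(options[p] for p in parts)):
    attrs = [options[p][m] for p, m in zip(parts, choice)]
    total = tuple(sum(col) for col in zip(*attrs))  # sum objectives per mix
    mixes.append((choice, total))

pareto = [(c, t) for c, t in mixes
          if not any(dominates(t2, t) for _, t2 in mixes if t2 != t)]
```

Note that the mix pairing the steel housing with the polymer bracket ends up dominated at product level even though each choice looks defensible part by part, which is exactly the pitfall of part-by-part selection described above.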
Abstract:
This paper presents a new parallel implementation of a previously developed hyperspectral coded aperture (HYCA) algorithm for compressive sensing on graphics processing units (GPUs). The HYCA method combines the ideas of spectral unmixing and compressive sensing, exploiting the high spatial correlation that can be observed in the data and the generally low number of endmembers needed to explain the data. The proposed implementation exploits the GPU architecture at a low level, thus taking full advantage of the computational power of GPUs by using shared memory and coalesced memory accesses. The proposed algorithm is evaluated not only in terms of reconstruction error but also in terms of computational performance, using two different GPU architectures by NVIDIA: the GeForce GTX 590 and the GeForce GTX TITAN. Experimental results using real data reveal significant speedups with regard to the serial implementation.