967 results for Palaeomagnetism Applied to Tectonics
Abstract:
Carbonate sedimentation in the Ossa-Morena Zone during the Palaeozoic comprises at least two main episodes. However, some chronological questions remain open due to the lack of biostratigraphic data in some carbonates. Sr isotope analysis was performed on selected limestones and marbles of the Ossa-Morena Zone in order to discriminate the Sr signatures of the two main carbonate sedimentation episodes. The Sr isotopic data from the analyzed carbonates show two clusters of 87Sr/86Sr ratios, one related to the Lower Cambrian carbonates and the other to the Lower-Middle Devonian ones.
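As an illustration of how such isotopic groupings can be separated, the sketch below clusters a handful of invented 87Sr/86Sr ratios into two groups. The values and the use of k-means are assumptions for demonstration only, not the authors' procedure.

```python
# Illustrative sketch: hypothetical 87Sr/86Sr ratios grouped into two
# clusters, mirroring the two sedimentation episodes; values are invented.
import numpy as np
from sklearn.cluster import KMeans

ratios = np.array([[0.70850], [0.70862], [0.70841],   # episode A (hypothetical)
                   [0.70789], [0.70794], [0.70801]])  # episode B (hypothetical)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(ratios)
for cluster in (0, 1):
    members = ratios[labels == cluster].ravel()
    print(f"cluster {cluster}: mean 87Sr/86Sr = {members.mean():.5f}")
```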
Abstract:
This paper analyzes the concept of constructive paranoia put forward by journalist and author Andrés Oppenheimer to promote development in Latin America. Building on that concept, the paper discusses the effectiveness of current English Language Teaching in particular, as well as what should be done to obtain better results. In conclusion, a restructuring of the approach, curriculum and methodology used in teaching the language is proposed.
Abstract:
The Simple Algorithm for Evapotranspiration Retrieving (SAFER) was used to estimate biophysical parameters and the energy balance components in two different experimental pasture areas in São Paulo state, Brazil. The experimental pastures comprise six rotational grazing system (RGS) and three continuous grazing system (CGS) paddocks. Landsat-8 images from the 2013 and 2015 dry and rainy seasons were used, as these years presented similar hydrological cycles, with 1,600 mm and 1,613 mm of annual precipitation, resulting in 19 cloud-free images. Bands 1 to 7 and thermal bands 10 and 11 were used together with data from a weather station located near the experimental area. NDVI, biomass, evapotranspiration and latent heat flux (λE) temporal values statistically distinguish the CGS from the RGS areas. The grazing system influences the energy partition, and these results indicate that RGS benefits biomass production, evapotranspiration and the microclimate, owing to higher λE values. SAFER is a feasible tool for estimating biophysical parameters and energy balance components in pasture and has the potential to discriminate continuous from rotational grazing systems in a temporal analysis.
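As a sketch of the kind of per-pixel computation involved, the snippet below derives NDVI from Landsat-8 reflectance (band 4 = red, band 5 = near-infrared) and applies the SAFER evapotranspiration fraction in its commonly published form, ET/ET0 = exp[a + b·(Ts/(α·NDVI))]. The coefficients a = 1.8 and b = -0.008 are values reported in the SAFER literature for Brazilian conditions and are assumptions here, as are the array names and inputs.

```python
import numpy as np

# Hypothetical per-pixel inputs (Landsat-8): red (band 4) and NIR (band 5)
# surface reflectance, surface albedo, and surface temperature in deg C.
red, nir = np.array([[0.12]]), np.array([[0.45]])
albedo, t_surf = np.array([[0.18]]), np.array([[28.0]])
et0 = 5.1  # reference ET from the weather station [mm/day], hypothetical

ndvi = (nir - red) / (nir + red)

# SAFER ET fraction; a and b taken from published Brazilian calibrations.
a, b = 1.8, -0.008
et_fraction = np.exp(a + b * (t_surf / (albedo * ndvi)))

et_actual = et_fraction * et0  # actual daily evapotranspiration [mm/day]
print(ndvi, et_actual)
```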
Abstract:
The application of Computational Fluid Dynamics based on the Reynolds-Averaged Navier-Stokes equations to the simulation of bluff body aerodynamics has been thoroughly investigated in the past. Although satisfactory accuracy can be obtained for some urban physics problems, the predictive capability of these models is limited to the mean flow properties, while the ability to accurately predict turbulent fluctuations is recognized to be of fundamental importance when dealing with wind loading and pollution dispersion problems. The need to correctly take the flow dynamics into account when such problems are faced has led researchers to move towards scale-resolving turbulence models such as Large Eddy Simulation (LES). The development and assessment of LES as a tool for the analysis of these problems is nowadays an active research field and represents a demanding engineering challenge. This research work has two objectives. The first is focused on wind load assessment and aims to study the capability of LES to reproduce wind load effects in terms of internal forces on structural members. This differs from the majority of the existing research, where the performance of LES is evaluated only in terms of surface pressures, and is done with a view to adopting LES as a complementary design tool alongside wind tunnel tests. The second objective is the study of the capability of LES to calculate pollutant dispersion in the built environment. The validation of LES in this field is considered to be of the utmost importance in order to conceive healthier and more sustainable cities. In order to validate the numerical setup adopted, a systematic comparison between numerical and experimental data is performed. The obtained results are intended to be used in the drafting of best practice guidelines for the application of LES in the urban physics field, with particular attention to wind load assessment and pollution dispersion problems.
Abstract:
The design optimization of industrial products has always been an essential activity to improve product quality while reducing time-to-market and production costs. Although cost management is very complex and comprises all phases of the product life cycle, the control of geometrical and dimensional variations, known as Dimensional Management (DM), allows compliance with product and process requirements. Hence, tolerance-cost optimization becomes the main practice for an effective application of Design for Tolerancing (DfT) and Design to Cost (DtC) approaches, by enabling a connection between product tolerances and the associated manufacturing costs. However, despite the growing interest in this topic, a profitable industrial application of these techniques is hampered by their complexity: the definition of a systematic framework is the key element for improving design optimization and enhancing the concurrent use of Computer-Aided tools and Model-Based Definition (MBD) practices. The present doctoral research aims to define and develop an integrated methodology for product/process design optimization that better exploits the capabilities of advanced simulations and tools. By implementing predictive models and multi-disciplinary optimization, a Computer-Aided Integrated framework for tolerance-cost optimization is proposed that allows the integration of DfT and DtC approaches and their direct application to the design of automotive components. Several case studies have been considered, with the final application of the integrated framework to a high-performance V12 engine assembly, achieving both functional targets and cost reduction. From a scientific point of view, the proposed methodology advances the tolerance-cost optimization of industrial components. The integration of theoretical approaches and Computer-Aided tools makes it possible to analyse the influence of tolerances on both product performance and manufacturing costs. The case studies proved the suitability of the methodology for application in the industrial field and identified further areas for improvement and refinement.
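The trade-off behind tolerance-cost optimization can be sketched with a classic textbook formulation, not the thesis's actual framework: each tolerance t_i carries a reciprocal cost model C_i(t) = a_i + b_i/t, and the tolerances must jointly satisfy a root-sum-square stack-up limit on the assembly. The coefficients and the stack-up limit below are invented for illustration.

```python
# Minimal tolerance-cost optimization sketch: reciprocal cost model with an
# RSS stack-up constraint. All numbers are illustrative, not from the thesis.
import numpy as np
from scipy.optimize import minimize

a = np.array([1.0, 1.5, 0.8])     # fixed cost per feature (hypothetical)
b = np.array([0.02, 0.05, 0.01])  # cost of tightening each tolerance (hypothetical)
T_ASM = 0.12                      # allowed assembly variation [mm] (hypothetical)

cost = lambda t: np.sum(a + b / t)
rss_margin = lambda t: T_ASM - np.sqrt(np.sum(t**2))  # must stay >= 0

res = minimize(cost, x0=np.full(3, 0.05),
               bounds=[(1e-4, T_ASM)] * 3,
               constraints=[{"type": "ineq", "fun": rss_margin}])
print("optimal tolerances [mm]:", res.x, "total cost:", res.fun)
```

Looser tolerances cut manufacturing cost but consume the assembly budget, so the optimizer tightens the cheap features and relaxes the expensive ones.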
Abstract:
Nowadays, cities deal with unprecedented pollution and overpopulation problems, and Internet of Things (IoT) technologies are supporting them in facing these issues and becoming increasingly smart. IoT sensors embedded in public infrastructure can provide granular data on the urban environment and help public authorities make their cities more sustainable and efficient. Nonetheless, this pervasive data collection also raises serious surveillance risks, jeopardizing privacy and data protection rights. Against this backdrop, this thesis addresses how IoT surveillance technologies can be implemented in a legally compliant and ethically acceptable fashion in smart cities. An interdisciplinary approach is embraced to investigate this question, combining doctrinal legal research (on privacy, data protection and criminal procedure) with insights from philosophy, governance and urban studies. The fundamental normative argument of this work is that surveillance constitutes a necessary feature of modern information societies. Nonetheless, as the complexity of surveillance phenomena increases, there emerges a need to develop more finely attuned proportionality assessments to ensure the legitimate implementation of monitoring technologies. This research tackles this gap from different perspectives, analyzing EU data protection legislation and United States and European case law on privacy expectations and surveillance. Specifically, a coherent multi-factor test assessing privacy expectations in public IoT environments and a surveillance taxonomy are proposed to inform proportionality assessments of surveillance initiatives in smart cities. These insights are also applied to four use cases: facial recognition technologies, drones, environmental policing, and smart nudging. Lastly, the investigation examines competing data governance models in the digital domain and the smart city, reviewing the EU's upcoming data governance framework. It is argued that, despite the stated policy goals, the balance of interests may often favor corporate strategies in data sharing, to the detriment of common-good uses of data in the urban context.
Abstract:
The study of ancient, undeciphered scripts presents unique challenges that depend both on the nature of the problem and on the peculiarities of each writing system. In this thesis, I present two computational approaches that are tailored to two different tasks and writing systems. The first method is aimed at the decipherment of the Linear A fraction signs, in order to discover their numerical values. This is achieved with a combination of constraint programming, ad-hoc metrics and paleographic considerations. The second main contribution of this thesis concerns the creation of an unsupervised deep learning model which uses drawings of signs from an ancient writing system to learn to distinguish different graphemes in the vector space. This system, based on techniques used in the field of computer vision, is adapted to the study of ancient writing systems by incorporating information about sign sequences in the model, mirroring what is often done in natural language processing. In order to develop this model, the Cypriot Greek Syllabary is used as a target, since it is a deciphered writing system. Finally, this unsupervised model is adapted to the undeciphered Cypro-Minoan and used to answer open questions about this script. In particular, by reconstructing multiple allographs that are not agreed upon by paleographers, it supports the idea that Cypro-Minoan is a single script rather than a collection of three scripts, as has been proposed in the literature. These results on two different tasks show that computational methods can be applied to undeciphered scripts despite the relatively small amount of available data, paving the way for further advances in paleography using these methods.
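The constraint-programming idea for the fraction signs can be sketched as a small search: candidate fraction values are assigned to signs, and an assignment is kept only if it respects simple structural constraints, for instance that certain attested combinations of signs sum to a whole unit while others must not exceed it. The sign names, candidate values and combinations below are invented placeholders, not the Linear A data or the thesis's actual constraint model.

```python
# Toy constraint search over numerical values for fraction signs.
# Sign names ("A", "B", "C") and the attested combinations are hypothetical.
from fractions import Fraction
from itertools import product

signs = ["A", "B", "C"]
candidates = [Fraction(1, d) for d in (2, 3, 4, 5, 6, 8, 10)]

sums_to_one = [("A", "B", "C")]          # combinations attested as one unit
at_most_one = [("A", "B"), ("B", "C")]   # combinations that must not exceed 1

for values in product(candidates, repeat=len(signs)):
    v = dict(zip(signs, values))
    if len(set(values)) != len(values):  # distinct signs get distinct values
        continue
    if any(sum(v[s] for s in combo) != 1 for combo in sums_to_one):
        continue
    if any(sum(v[s] for s in combo) > 1 for combo in at_most_one):
        continue
    print({s: str(v[s]) for s in signs})  # a consistent assignment
```

Even this toy version narrows the space sharply: with the constraints above, only permutations of {1/2, 1/3, 1/6} survive.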
Abstract:
The project aims to build an understanding of additive manufacturing (AM) and other Manufacturing 4.0 techniques with a view to industrialization. First, the internal material anisotropy of elements created with the most economically feasible FDM technique was established. The main drivers of variability in AM were identified, with a focus on achieving internal material isotropy. Subsequently, a technique for deposition parameter optimization was presented, and the procedure was further tested on other polymeric materials and composites. A replicability assessment by means of Technology 4.0 was proposed, and the subsequent industry findings revealed the need to develop a process that demonstrates how to re-engineer designs in order to obtain the best results with AM processing. The final study applies the Industrial Design and Structure Method (IDES), drawing on all the knowledge previously gathered to fully re-engineer a product using tools from the 4.0 era, from product feasibility studies to CAE (FEM) analysis and CAM (DfAM). These results would help make AM and FDM processes a viable option to be combined with composite technologies, achieving a reliable, cost-effective manufacturing method that could also be used for mass-market industry applications.
Root cause analysis applied to a finite element model's refinement of a negative stiffness structure
Abstract:
Negative stiffness structures are mechanical systems that require a decrease in the applied force to generate an increase in displacement. They possess special characteristics such as snap-through and bi-stability, which make them particularly suitable for applications such as shock absorption, vibration isolation and damping. These characteristics have therefore attracted growing attention, and numerical simulation is of great interest for matching them to the intended application. In this regard, this thesis is a continuation of previous studies on a circular negative stiffness structure and aims to refine the numerical model by presenting a new solution. To that end, an investigation procedure is needed. Among the available methods, root cause analysis was chosen to perform the investigation, since it provides a clear view of the problem under analysis and a categorization of all the causes behind it. As a result of the cause-effect analysis, the main causes influencing the numerical results were obtained. Once all of the causes were listed, solutions were proposed, leading to a new numerical model: a nonlinear analysis with hexahedral elements and a hyperelastic material model. The results were analyzed through force-displacement curves, allowing the structure's energy recovery to be visualized. When compared with the experimental results, the trend is similar and the negative stiffness behaviour is present.
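In force-displacement terms, the defining feature described above can be written compactly (a standard textbook characterization, not a formula taken from the thesis):

```latex
k(u) \;=\; \frac{\mathrm{d}F}{\mathrm{d}u} \;<\; 0
\qquad \text{for } u_1 < u < u_2 ,
```

where F(u) is the applied force and u_1, u_2 are the limit points bounding the snap-through branch; between them the structure releases stored energy, which is what the force-displacement curves make visible.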
Abstract:
Radio Simultaneous Localization and Mapping (SLAM) consists of simultaneously tracking a target and estimating the surrounding environment, in order to build a map and estimate the target's movements within it. It is an increasingly exploited technique for automotive applications, to improve the localization of obstacles and the target's movement relative to them; for emergency situations, for example when it is necessary to explore environments with limited visibility with a drone or a robot; and for personal radar applications, thanks to its versatility and low cost. To date, these systems have been based on light detection and ranging (lidar) or visual cameras, high-accuracy but expensive approaches that are limited to specific environments and weather conditions. In smoke, fog or simple darkness, by contrast, radar-based systems can operate in exactly the same way. In this thesis activity, the Fourier-Mellin algorithm is analyzed and implemented to verify its applicability to Radio SLAM, in which the radar frames can be treated as images and the radar motion between consecutive frames can be recovered through image registration. Furthermore, a simplified version of the algorithm is proposed, in order to solve the problems that the Fourier-Mellin algorithm exhibits when working with real radar images and to improve performance. The INRAS RBK2, a 2x16 MIMO mmWave radar, is used for the experimental acquisitions, consisting of multiple tests performed in Lab-E of the Cesena Campus, University of Bologna. The performance of Fourier-Mellin and of its simplified version is also compared with that of MatchScan, a classic algorithm for SLAM systems.
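To illustrate the registration step at the core of the Fourier-Mellin approach, the sketch below estimates the translation between two frames by phase correlation; the full algorithm additionally resamples the magnitude spectra on a log-polar grid so that rotation and scale also become recoverable as translations. This is a generic textbook formulation, not the thesis code.

```python
import numpy as np

def phase_correlation(frame_a, frame_b):
    """Estimate the (dy, dx) translation between two equally sized frames."""
    A, B = np.fft.fft2(frame_a), np.fft.fft2(frame_b)
    R = A * np.conj(B)
    R /= np.abs(R) + 1e-12                 # normalized cross-power spectrum
    corr = np.fft.ifft2(R).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # shifts beyond half the frame wrap around: map them to negative offsets
    h, w = frame_a.shape
    return (dy - h if dy > h // 2 else dy,
            dx - w if dx > w // 2 else dx)

# quick check with a synthetic frame shifted by (3, -5)
rng = np.random.default_rng(0)
a = rng.random((64, 64))
b = np.roll(a, shift=(3, -5), axis=(0, 1))
print(phase_correlation(b, a))  # expected: (3, -5)
```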
Abstract:
In the field of power electronics, several types of motor control systems have been developed using STM microcontrollers and power boards. Power electronic inverters are widely used both in industrial power applications and in domestic appliances. Inverters are used to control the torque, speed, and position of the rotor in AC motor drives, and an inverter delivers constant-voltage, constant-frequency power in uninterruptible power supplies. Because conventional inverter power supplies have high power consumption and a low transfer efficiency, a three-phase sine-wave AC power supply was created using the STM32 embedded system, which has low power consumption and adequate speed, providing an output frequency of 50 Hz at the rated RMS line voltage. The STM32-based inverter is a power supply that integrates, reduces, and optimizes the power electronics application, covering the required hardware, software, and application layers, including the power architecture and the associated design techniques and tools. Power inverters are currently implemented in green-energy power systems together with low-power devices such as sensors or microcontrollers to operate motors and pumps; an STM-based power inverter is efficient, low-cost and reliable. My thesis work was based on STM motor drives and control systems, which can be implemented in a gas analyser for operating its pumps and motors; such systems have been widely applied in various engineering sectors due to their ability to respond to adverse structural changes and their improved reliability. The present research used an STM inverter board with a low-power MCU such as the NUCLEO, starting from practical examples such as a blinking LED and PWM generation. A three-phase inverter model was then implemented with the STEVAL-IPM08B board, which converts a single-phase 230 V AC input into a three-phase 380 V AC output suitable for operating an induction motor.
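The PWM generation mentioned above reduces to three sinusoidal duty-cycle references spaced 120° apart; a minimal sketch follows, assuming a carrier frequency and modulation index that are illustrative rather than taken from the thesis. On the MCU, these values would be written to the timer compare registers once per carrier cycle.

```python
import numpy as np

F_OUT = 50.0      # output frequency [Hz], as in the abstract
F_PWM = 16_000.0  # PWM carrier frequency [Hz] (assumed)
M = 0.9           # modulation index, 0..1 (assumed)

def duty_cycles(t):
    """Duty cycles of the three phase legs at time t (sinusoidal PWM)."""
    phase = np.array([0.0, -2 * np.pi / 3, -4 * np.pi / 3])
    return 0.5 * (1.0 + M * np.sin(2 * np.pi * F_OUT * t + phase))

# first three carrier periods: values the timer compare registers would get
for k in range(3):
    print(duty_cycles(k / F_PWM))
```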
Abstract:
A detailed magnetostratigraphic and rock-magnetic study of two Late Palaeozoic rhythmite exposures (Itu and Rio do Sul) from the Itararé Group (Paraná Basin, Brazil) is presented in this paper. After stepwise alternating-field and thermal cleaning procedures were performed, samples from both collections show reversed characteristic magnetization components, as expected for Late Palaeozoic rocks. However, the Itu rocks presented an anomalously flat inclination pattern that could not be corrected with mathematical methods based on the virtual geomagnetic pole (VGP) distributions. Correlation tests between the maximum anisotropy of magnetic susceptibility axis (K1) and the magnetic declination indicated a possible mechanical influence on the remanence acquisition. The Rio do Sul sequence displayed medium to high inclinations and provided a high-quality palaeomagnetic pole (after a shallowing correction of f = 0.8) at 347.5°E, 63.2°S (N = 119; A95 = 3.3°; K = 31), which is in accordance with the Palaeozoic apparent polar wander path of South America. The angular dispersion (Sb) of the VGP distribution, calculated on the basis of both the 45° cut-off angle and the Vandamme method, was compared with the best-fit Model G for mid-latitudes. Both Sb results are in reasonable agreement with the (palaeo)latitudinal Sb-λ relationship predicted for the Cretaceous Normal Superchron (CNS), although the Sb value after the Vandamme cut-off is slightly lower than expected. This result, in addition to those previously reported for low palaeolatitudes during the Permo-Carboniferous Reversed Superchron (PCRS), indicates that the low secular variation regime of the geodynamo already identified for the CNS might also have been predominant during the PCRS.
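Two textbook relations underlie the corrections mentioned above (standard forms, not reproduced from the paper): the King (1955) inclination shallowing model, applied here with a flattening factor f = 0.8, and the usual estimate of VGP angular dispersion (within-site scatter corrections omitted):

```latex
\tan I_{\mathrm{obs}} = f \,\tan I_{\mathrm{field}}, \qquad
S_b^{2} = \frac{1}{N-1} \sum_{i=1}^{N} \Delta_i^{2},
```

where I_obs is the measured (shallowed) inclination, I_field the true field inclination, and Δ_i the angular distance of the i-th of N VGPs from the mean pole, computed after applying the 45° or Vandamme cut-off.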
Abstract:
Rock phosphates have low solubility in water but good solubility in acid. The use of organic compounds together with these phosphorus sources, applied to the basal leaf axils of pineapple, can increase the solubility of the phosphate source and the availability of P to the crop. A greenhouse experiment was conducted using Araxá rock phosphate (10 g) in combination or not with solutions containing increasing concentrations of humic acids (0 to 40 mmol L-1 of carbon), with or without citric acid (0.005 mmol L-1), applied to the basal leaf axils of pineapple cv. Pérola. Growth and nutritional characteristics of the aerial plant parts were assessed. Growth rates of the aerial parts and N, P, K, Ca and Mg contents increased curvilinearly with increasing concentration of carbon in the form of humic acids. Maximum values were found at a concentration of 9.3 mmol L-1 of carbon combined with 0.005 mmol L-1 of citric acid and the rock phosphate.
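A curvilinear response with an interior maximum, as described above, is typically captured by fitting a quadratic to the dose-response data and locating its vertex at -b/(2a); the sketch below does this with invented data, purely to illustrate how an optimum such as 9.3 mmol L-1 is derived.

```python
# Quadratic dose-response fit; the (dose, response) pairs are invented.
import numpy as np

dose = np.array([0.0, 5.0, 10.0, 20.0, 40.0])   # mmol L-1 of C
resp = np.array([10.2, 14.1, 15.0, 13.2, 8.9])  # e.g. growth rate

a, b, c = np.polyfit(dose, resp, deg=2)          # resp ~ a*d**2 + b*d + c
d_opt = -b / (2 * a)                             # vertex of the parabola
print(f"estimated optimum dose: {d_opt:.1f} mmol L-1")
```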
Abstract:
Currently, the quality of the Indonesian national road network is inadequate due to several constraints, including overcapacity and overloaded trucks. The high deterioration rate of road infrastructure in developing countries, along with major budgetary restrictions and high growth in traffic, has led to an emerging need to improve the performance of the highway maintenance system. However, the high number of intervening factors and their complex effects require advanced tools to solve this problem successfully. The high learning capabilities of Data Mining (DM) make it a powerful solution to this problem. In the past, these tools have been successfully applied to solve complex and multi-dimensional problems in various scientific fields. Therefore, it is expected that DM can be used to analyze the large amount of data regarding pavement and traffic, identify the relationships between variables, and provide predictions from the data. In this paper, we present a new approach to predict the International Roughness Index (IRI) of pavement based on DM techniques. DM was used to analyze the initial IRI data, including age, Equivalent Single Axle Load (ESAL), cracking, potholes, rutting, and long cracks. The model was developed and verified using data from the Integrated Indonesia Road Management System (IIRMS), measured with the National Association of Australian State Road Authorities (NAASRA) roughness meter. The results of the proposed approach are compared with the IIRMS analytical model adapted to the IRI, and the advantages of the new approach are highlighted. We show that the novel data-driven model is able to learn, with high accuracy, the complex relationships between the IRI and the contributing factors of overloaded trucks.
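The abstract does not name the specific DM technique, so the sketch below uses a random forest regressor as an illustrative stand-in, trained on synthetic data with invented feature names mirroring the variables listed above (age, ESAL, cracking, potholes, rutting).

```python
# Illustrative only: the paper does not specify its exact DM model, so a
# random forest regressor stands in; the synthetic data below is invented.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "age": rng.uniform(0, 20, n),          # years since construction
    "esal": rng.uniform(0, 5e6, n),        # equivalent single axle loads
    "crack_area": rng.uniform(0, 30, n),   # % of surface
    "potholes": rng.integers(0, 10, n),
    "rutting": rng.uniform(0, 25, n),      # mm
})
# invented ground truth: IRI grows with age, traffic load and distress
df["iri"] = (2 + 0.1 * df["age"] + 4e-7 * df["esal"]
             + 0.05 * df["crack_area"] + rng.normal(0, 0.3, n))

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="iri"), df["iri"], test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(X_train, y_train)
print("R^2 on held-out sections:", r2_score(y_test, model.predict(X_test)))
```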
Abstract:
We present an overview of the current knowledge of the structure and seismic behavior of the Alhama de Murcia Fault (AMF). We use a fault trace map created from a lidar DEM, combined with the geodynamic setting, the analysis of the morphology, the distribution of seismicity, the geological information from 1:50,000-scale geological maps and the available paleoseismic data, to describe the recent activity of the AMF. We discuss the importance of uncertainties regarding the structure and kinematics of the AMF as applied to the interpretation and spatial correlation of the paleoseismic data. In particular, we discuss the nature of the SE-dipping faults (antithetic to the main faults of the AMF) in several segments that have been studied in previous paleoseismic works. A special chapter is dedicated to the analysis of the tectonic source of the 2011 Lorca earthquake, which took place between two large segments of the fault.