926 results for Analogous ground
Abstract:
We study an elliptic system of the form Lu = |v|^(p-1) v and Lv = |u|^(q-1) u in Ω with homogeneous Dirichlet boundary condition, where Lu := -Δu in the case of a bounded domain and Lu := -Δu + u in the cases of an exterior domain or the whole space R^N. We analyze the existence, uniqueness, sign and radial symmetry of ground state solutions and also look for sign-changing solutions of the system. More general non-linearities are also considered.
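Written out in display form, the system and the operator described above are (a restatement of the abstract's own notation, not additional material from the work):

```latex
\begin{cases}
  L u = |v|^{p-1} v & \text{in } \Omega,\\
  L v = |u|^{q-1} u & \text{in } \Omega,\\
  u = v = 0 & \text{on } \partial\Omega,
\end{cases}
\qquad
L u :=
\begin{cases}
  -\Delta u & \text{if } \Omega \text{ is bounded},\\
  -\Delta u + u & \text{if } \Omega \text{ is an exterior domain or } \Omega = \mathbb{R}^N.
\end{cases}
```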
Abstract:
Unstable shoes have been designed to promote "natural instability": during walking they should simulate barefoot gait, enhancing muscle activity and thus conferring an advantage over regular tennis shoes. Recent studies showed that, after special training on the appropriate walking pattern, the use of the Masai Barefoot Technology (MBT) shoe increases muscle activation during walking. Our study presents a comparison of muscle activity as well as horizontal and vertical forces during gait with the MBT shoe, a standard tennis shoe and barefoot walking in healthy individuals without previous training. These variables were compared in 25 female subjects, and gait conditions were compared using repeated-measures ANOVA (effect size: 0.25). Walking with the MBT shoe in this non-instructed condition produced higher vertical forces (first vertical peak and weight-acceptance rate) than walking with a standard shoe or walking barefoot, which suggests an increase in the loads received by the musculoskeletal system, especially at heel strike. Walking with the MBT shoe did not increase muscle activity compared to walking with the standard shoe. The barefoot condition was more effective than the MBT shoe at enhancing muscle activation. Therefore, in healthy individuals, no advantage was found in using the MBT over a standard tennis shoe without a special training period. Further studies using the MBT without any instruction over a longer period are needed to evaluate whether the higher loads observed in the present study would return to their baseline values after a period of adaptation, and whether the muscle activity would increase over time.
Abstract:
We report on charmonium measurements [J/ψ(1S), ψ'(2S), and χ_c(1P)] in p + p collisions at √s = 200 GeV. We find that the fraction of J/ψ coming from the feed-down decay of ψ' and χ_c in the midrapidity region (|y| < 0.35) is 9.6 ± 2.4% and 32 ± 9%, respectively. We also present the p_T and rapidity dependencies of the J/ψ yield measured via dielectron decay at midrapidity (|y| < 0.35) and via dimuon decay at forward rapidity (1.2 < |y| < 2.2). The statistical precision greatly exceeds that reported in our previous publication [Phys. Rev. Lett. 98, 232002 (2007)]. The new results are compared with other experiments and discussed in the context of current charmonium production models.
Abstract:
A neural network model to predict ozone concentration in the São Paulo Metropolitan Area was developed, based on average values of meteorological variables in the morning (8:00-12:00 hr) and afternoon (13:00-17:00 hr) periods. Outputs are the maximum and average ozone concentrations in the afternoon (12:00-17:00 hr). The correlation coefficients between computed and measured values were 0.82 and 0.88 for the maximum and average ozone concentration, respectively. The model presented good performance as a prediction tool for the maximum ozone concentration. For prediction periods of 1 to 5 days, failure rates of 0 to 23% (95% confidence) were obtained.
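A rough sketch of the kind of feed-forward model described above. The abstract does not list the meteorological variables, the network size, or the trained weights, so the feature count, hidden width, and random weights below are placeholders, not the authors' model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature layout: averaged morning/afternoon values of a few
# meteorological variables (the abstract does not enumerate them).
n_features = 6   # assumed
n_hidden = 10    # assumed
n_outputs = 2    # maximum and average afternoon ozone concentration

# Randomly initialised weights stand in for a trained model.
W1 = rng.normal(size=(n_features, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(size=(n_hidden, n_outputs))
b2 = np.zeros(n_outputs)

def predict(x):
    """One forward pass of a small feed-forward network."""
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2

x = rng.normal(size=n_features)       # one day's averaged measurements
ozone_max, ozone_avg = predict(x)     # two scalar predictions
```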
Abstract:
OBJECTIVE: To analyze and compare the vertical component of ground reaction forces and isokinetic muscle parameters for plantar flexion and dorsiflexion of the ankle between long-distance runners, triathletes, and nonathletes. METHODS: Seventy-five males with a mean age of 30.26 (±6.5) years were divided into three groups: a triathlete group (n=26), a long-distance runner group (n = 23), and a non-athlete control group. The kinetic parameters were measured during running using a force platform, and the isokinetic parameters were measured using an isokinetic dynamometer. RESULTS: The non-athlete control group and the triathlete group exhibited smaller vertical forces, a greater ground contact time, and a greater application of force during maximum vertical acceleration than the long-distance runner group. The total work (180º/s) was greater in eccentric dorsiflexion and concentric plantar flexion for the non-athlete control group and the triathlete group than the long-distance runner group. The peak torque (60º/s) was greater in eccentric plantar flexion and concentric dorsiflexion for the control group than the athlete groups. CONCLUSIONS: The athlete groups exhibited less muscle strength and resistance than the control group, and the triathletes exhibited less impact and better endurance performance than the runners.
Abstract:
In Performance-Based Earthquake Engineering (PBEE), evaluating the seismic performance (or seismic risk) of a structure at a designed site has gained major attention, especially in the past decade. One of the objectives in PBEE is to quantify the seismic reliability of a structure (due to future random earthquakes) at a site. For that purpose, Probabilistic Seismic Demand Analysis (PSDA) is utilized as a tool to estimate the Mean Annual Frequency (MAF) of exceeding a specified value of a structural Engineering Demand Parameter (EDP). This dissertation focuses mainly on applying an average of a certain number of spectral acceleration ordinates over a certain interval of periods, Sa,avg(T1,…,Tn), as a scalar ground motion Intensity Measure (IM) when assessing the seismic performance of inelastic structures. Since the interval of periods over which Sa,avg is computed is related to the greater or lesser influence of higher vibration modes on the inelastic response, it is appropriate to speak of improved IMs. The results using these improved IMs are compared with conventional elastic-based scalar IMs (e.g., pseudo-spectral acceleration, Sa(T1), or peak ground acceleration, PGA) and an advanced inelastic-based scalar IM (i.e., inelastic spectral displacement, Sdi). The advantages of applying improved IMs are: (i) "computability" of the seismic hazard according to traditional Probabilistic Seismic Hazard Analysis (PSHA), because ground motion prediction models are already available for Sa(Ti), and hence it is possible to employ existing models to assess hazard in terms of Sa,avg; and (ii) "efficiency", i.e., smaller variability of structural response, which was minimized in order to identify the optimal range over which to compute Sa,avg. More work is needed to also assess the desirable properties of "sufficiency" and "scaling robustness", which are disregarded in this dissertation.
However, for ordinary records (i.e., with no pulse-like effects), using the improved IMs is found to be more accurate than using the elastic- and inelastic-based IMs. For structural demands that are dominated by the first mode of vibration, the advantage of using Sa,avg is negligible relative to the conventionally used Sa(T1) and the advanced Sdi. For structural demands with significant higher-mode contribution, an improved scalar IM that incorporates higher modes needs to be utilized. In order to fully understand the influence of the IM on the seismic risk, a simplified closed-form expression for the probability of exceeding a limit-state capacity was chosen as a reliability measure under seismic excitations and implemented for Reinforced Concrete (RC) frame structures. This closed-form expression is particularly useful for the seismic assessment and design of structures, taking into account the uncertainty in the generic variables, structural "demand" and "capacity", as well as the uncertainty in seismic excitations. The assumed framework employs nonlinear Incremental Dynamic Analysis (IDA) procedures in order to estimate the variability in the response of the structure (demand) to seismic excitations, conditioned on the IM. The estimation of the seismic risk using the simplified closed-form expression is affected by the choice of IM: the final seismic risk is not constant across IMs, although it remains within the same order of magnitude. Possible reasons concern the assumed non-linear model or the insufficiency of the selected IM. Since it is impossible to state what the "real" probability of exceeding a limit state is by looking at the total risk, the only way forward is the optimization of the desirable properties of an IM.
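As an illustration of the averaged intensity measure discussed above: in the literature, Sa,avg over a period range is often computed as the geometric mean of the spectral ordinates. The dissertation's exact averaging rule may differ, so the following is only a sketch with made-up spectral values:

```python
import numpy as np

def sa_avg(sa_values):
    """Geometric mean of spectral-acceleration ordinates Sa(T1)..Sa(Tn).

    This follows the common geometric-mean definition of Sa,avg over a
    period interval; the dissertation may use a different averaging rule.
    """
    sa = np.asarray(sa_values, dtype=float)
    return float(np.exp(np.mean(np.log(sa))))

# Illustrative (made-up) spectral ordinates, in g, at periods T1..Tn
# spanning from below the fundamental period to above it:
im = sa_avg([0.80, 0.60, 0.45, 0.30])
```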
Abstract:
The research is part of a survey for the detection of the hydraulic and geotechnical conditions of river embankments funded by the Reno River Basin Regional Technical Service of the Region Emilia-Romagna. The hydraulic safety of the Reno River, one of the main rivers in North-Eastern Italy, is indeed of primary importance to the Emilia-Romagna regional administration. The large longitudinal extent of the banks (several hundreds of kilometres) has generated great interest in non-destructive geophysical methods, which, compared to other methods such as drilling, allow for the faster and often less expensive acquisition of high-resolution data. The present work aims to evaluate Ground Penetrating Radar (GPR) for the detection of local non-homogeneities (mainly stratigraphic contacts, cavities and conduits) inside the embankments of the Reno River and its tributaries, taking into account supplementary data collected with traditional destructive tests (boreholes, cone penetration tests, etc.). A comparison with other non-destructive methodologies, such as electrical resistivity tomography (ERT), Multichannel Analysis of Surface Waves (MASW) and FDEM induction, was also carried out in order to verify the usability of GPR and to support the integration of various geophysical methods into the regular maintenance and checking of embankment conditions. The first part of this thesis is dedicated to the state of the art concerning the geographic, geomorphologic and geotechnical characteristics of the Reno River and its tributaries' embankments, as well as the description of some geophysical applications on embankments of European and North American rivers, which were used as the bibliographic basis for this thesis.
The second part is an overview of the geophysical methods employed for this research (with particular attention to GPR), reporting their theoretical basis and an in-depth treatment of some techniques for the analysis and representation of geophysical data when applied to river embankments. The subsequent chapters, following the main scope of this research, namely to highlight the advantages and drawbacks of Ground Penetrating Radar applied to the Reno River and its tributaries' embankments, show the results obtained by analyzing different cases that could yield the formation of weakness zones, which may subsequently lead to embankment failure. Among the advantages, a considerable acquisition speed and a spatial resolution of the obtained data unmatched by other methodologies were recorded. With regard to the drawbacks, some factors related to the attenuation losses of wave propagation, due to different contents of clay, silt and sand, as well as surface effects, significantly limited the correlation between GPR profiles and geotechnical information and therefore compromised the embankment safety assessment. In summary, Ground Penetrating Radar could represent a suitable tool for checking river dike conditions, but its use is significantly limited by the geometric and geotechnical characteristics of the Reno River and its tributaries' levees. In fact, only the shallower part of the embankment was investigated, yielding information related only to changes in electrical properties, without any numerical measurement. Consequently, GPR application is ineffective for a preliminary assessment of embankment safety conditions, while for detailed campaigns at shallow depth, which aim to achieve immediate results with optimal precision, its use is highly recommended.
The cases where a multidisciplinary approach was tested revealed an optimal interconnection of the various geophysical methodologies employed, producing qualitative results in the preliminary phase (FDEM), ensuring a quantitative and high-confidence description of the subsoil (ERT) and, finally, providing fast and highly detailed analysis (GPR). As a recommendation for future research, the simultaneous exploitation of several geophysical devices to assess the safety conditions of river embankments is strongly suggested, especially when facing a likely flood event, when the entire extent of the embankments themselves must be investigated.
Abstract:
Human reasoning is a fascinating and complex cognitive process that can be applied in different research areas such as philosophy, psychology, law and finance. Unfortunately, developing supporting software (for those different areas) able to cope with such complex reasoning is difficult and requires a suitable abstract logical formalism. In this thesis we aim to develop a program whose job is to evaluate a theory (a set of rules) with respect to a goal and provide results such as "the goal is derivable from the KB (of the theory)". In order to achieve this goal we need to analyse different logics and choose the one that best meets our needs. In logic, usually, we try to determine whether a given conclusion is logically implied by a set of assumptions T (theory). However, when we deal with logic programming we need an efficient algorithm in order to find such implications. In this work we use a logic rather similar to human logic. Indeed, human reasoning requires an extension of first-order logic able to reach a conclusion depending on premises that are not definitely true and belong to an incomplete set of knowledge. Thus, we implemented a defeasible logic framework able to manipulate defeasible rules. Defeasible logic is a non-monotonic logic designed for efficient defeasible reasoning by Nute (see Chapter 2). These kinds of applications are useful in the legal domain, especially if they offer an implementation of an argumentation framework that provides a formal modelling of the game. Roughly speaking, let the theory be the set of laws, let a keyclaim be the conclusion that one of the parties wants to prove (and the other wants to defeat), and add dynamic assertion of rules, namely facts put forward by the parties; then we can play an argumentative challenge between two players and decide whether the conclusion is provable or not, depending on the different strategies performed by the players.
Implementing a game model requires one more meta-interpreter able to evaluate the defeasible logic framework; indeed, according to Gödel's theorem (see page 127), we cannot evaluate the meaning of a language using the tools provided by the language itself, but need a meta-language able to manipulate the object language. Thus, rather than a simple meta-interpreter, we propose a Meta-level containing different Meta-evaluators. The first has been explained above, the second is needed to perform the game model, and the last will be used to change game execution and tree-derivation strategies.
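A toy sketch of defeasible-rule evaluation in the spirit of the framework described above, not the thesis's actual Prolog meta-interpreter: it keeps only facts, defeasible rules, and a superiority relation, omitting strict rules, defeaters, and the game machinery. The rule names and the bird/penguin example are illustrative:

```python
# Literals are strings; "~p" is the complement of "p".
def neg(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def defeasibly_provable(goal, facts, rules, superior):
    """rules: {name: (antecedents, conclusion)};
    superior: set of (winner, loser) rule-name pairs.
    A goal holds if it is a fact, or some firing rule for it beats
    every firing rule for its complement. (No cycle handling.)"""
    if goal in facts:
        return True
    def fires(name):
        ants, _ = rules[name]
        return all(defeasibly_provable(a, facts, rules, superior) for a in ants)
    supporting = [r for r, (_, c) in rules.items() if c == goal and fires(r)]
    attacking = [r for r, (_, c) in rules.items() if c == neg(goal) and fires(r)]
    return any(all((s, a) in superior for a in attacking) for s in supporting)

facts = {"bird"}
rules = {
    "r1": (["bird"], "flies"),        # birds usually fly
    "r2": (["penguin"], "~flies"),    # penguins usually don't
}
superior = {("r2", "r1")}             # r2 overrides r1 when both fire
print(defeasibly_provable("flies", facts, rules, superior))  # True: r2 does not fire
```

Adding "penguin" to the facts makes r2 fire and, being superior to r1, defeat the conclusion "flies".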
Abstract:
One of the main research areas of artificial intelligence concerns the construction of agents (in particular, robots) able to assist or replace humans in certain activities. To this end, two different design approaches can be followed: manual design and automatic design. The latter may be preferable in contexts where requirements such as flexibility and adaptation must be taken into account, which are often essential for carrying out non-trivial tasks in real-world settings. Automatic design relies on a model with which to represent the agent's behaviour and on a search (or learning) technique that iteratively modifies the model so as to make it as well suited as possible to the task at hand. In this work, the model used to represent the robot's behaviour is a Boolean network (also known as a Kauffman network). This model was chosen because its simple structure makes it possible to study the complex dynamics that nevertheless arise within it. Moreover, recent literature shows that network models, such as artificial neural networks, have proven effective for robot programming. The methodology for evolving this model relies on metaheuristic search techniques capable of finding good solutions in limited time despite the large search spaces. Previous works have already demonstrated the applicability of the methodology and investigated it on a single robot. The aim of this work is to provide a proof of principle for a set of robots, opening new avenues for design in swarm robotics. In this scenario, simple autonomous agents, by interacting with one another, give rise to coordinated behaviour, accomplishing tasks that are impossible for a single unit.
This work also provides useful and interesting opportunities for the study of interactions between Boolean networks. Indeed, each robot is controlled by a Boolean network that determines its output as a function of its own internal configuration as well as of the inputs received from neighbouring robots. In this work we define a task in which the swarm must discriminate between two different patterns on the arena floor using only locally exchanged information. After a first series of preliminary experiments, which made it possible to identify the parameters and the best search algorithm, we simplified the problem instance in order to better investigate the criteria that may affect performance. A particular combination of information was thus identified which, when exchanged locally between robots, improves performance. This hypothesis was confirmed by subsequently applying the result to a harder instance of the problem. The work concludes by suggesting new tools for the study of emergent phenomena in contexts where Boolean networks interact with one another.
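A minimal sketch of the synchronous dynamics of a random Boolean (Kauffman) network of the kind used to control each robot. The node count, connectivity and random functions below are illustrative placeholders, not the parameters evolved in the thesis:

```python
import random

random.seed(42)
N, K = 8, 2  # N nodes, each reading K inputs (illustrative values)

# Each node i reads K randomly chosen nodes...
inputs = [random.sample(range(N), K) for _ in range(N)]
# ...and maps each of the 2^K input combinations to 0 or 1 via a lookup table.
tables = [[random.randint(0, 1) for _ in range(2 ** K)] for _ in range(N)]

def step(state):
    """Synchronous update: every node reads its inputs from the old state."""
    new = []
    for i in range(N):
        idx = 0
        for bit in (state[j] for j in inputs[i]):
            idx = (idx << 1) | bit  # pack input bits into a table index
        new.append(tables[i][idx])
    return new

state = [random.randint(0, 1) for _ in range(N)]
for _ in range(5):
    state = step(state)
```

In the swarm setting described above, some of a network's input nodes would be wired to sensor readings and messages from neighbouring robots rather than to its own nodes.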
Abstract:
The Gaia space mission is a major project for the European astronomical community. As challenging as it is, the processing and analysis of the huge data flow incoming from Gaia is the subject of thorough study and preparatory work by the DPAC (Data Processing and Analysis Consortium), in charge of all aspects of the Gaia data reduction. This PhD thesis was carried out in the framework of the DPAC, within the team based in Bologna. The task of the Bologna team is to define the calibration model and to build a grid of spectro-photometric standard stars (SPSS) suitable for the absolute flux calibration of the Gaia G-band photometry and the BP/RP spectrophotometry. Such a flux calibration can be performed by repeatedly observing each SPSS during the lifetime of the Gaia mission and by comparing the observed Gaia spectra to the spectra obtained by our ground-based observations. Because of both the different observing sites involved and the huge number of frames expected (≃100000), it is essential to maintain the maximum homogeneity in data quality, acquisition and treatment, and particular care has to be taken to test the capabilities of each telescope/instrument combination (through the "instrument familiarization plan") and to devise methods to keep under control, and eventually correct for, the typical instrumental effects that can affect the high precision required for the Gaia SPSS grid (a few % with respect to Vega). I contributed to the ground-based survey of Gaia SPSS in many respects: the observations, the instrument familiarization plan, the data reduction and analysis activities (both photometry and spectroscopy), and the maintenance of the data archives. However, the field I was personally responsible for was photometry, and in particular relative photometry for the production of short-term light curves.
In this context I defined and tested a semi-automated pipeline which allows for the pre-reduction of imaging SPSS data and the production of aperture photometry catalogues ready to be used for further analysis. A series of semi-automated quality control criteria are included in the pipeline at various levels, from pre-reduction, to aperture photometry, to light curves production and analysis.
Abstract:
Studies in regions of the nuclear chart where model predictions of the properties of nuclei fail can bring a better understanding of the strong interaction in the nuclear medium. To such regions belongs the so-called "island of inversion", centered around the Ne, Na and Mg isotopes with 20 neutrons, in which unexpected ground-state spins, large deformations and dense low-energy spectra appear. This is a strong argument that the magic number N = 20 does not correspond to a closed shell in this area. In this thesis, investigations of the isotope shifts of stable 24,25,26Mg, as well as the spins and magnetic moments of short-lived 29,31Mg, are presented. The studies were successfully performed at the ISOLDE facility at CERN using collinear laser and beta-NMR spectroscopy techniques. The isotopes were investigated as singly charged ions in the 280-nm transition from the atomic ground state 2S1/2 to one of the two lowest excited states 2P1/2,3/2 using continuous-wave laser beams. The isotope-shift measurements with fluorescence detection for the three stable isotopes show that it is feasible to perform the same studies on radioactive Mg isotopes up to the "island of inversion". This will make it possible to determine differences in the mean square charge radii and interpret them in terms of deformation. The high detection efficiency for beta particles and optical pumping close to saturation made it possible to obtain very good beta-asymmetry signals for 29Mg and 31Mg, with half-lives around 1 s and production yields of about 10^5 ions/s. For this purpose the ions were implanted into a host crystal lattice. Such detection of the atomic resonances revealed their hyperfine structure, which gives the sign and a first estimate of the value of the magnetic moment. Nuclear magnetic resonance also gave their g-factors, with a relative uncertainty smaller than 0.2%. By combining the two techniques, the nuclear spin of both isotopes could also be unambiguously determined.
The measured spins and g-factors show that 29Mg, with 17 neutrons, lies outside the "island of inversion". On the other hand, 31Mg, with 19 neutrons, has an unexpected ground-state spin which can be explained only by promoting at least two neutrons across the N = 20 shell gap. This places this nucleus inside the "island". However, modern shell-model approaches cannot predict this level as the ground state but only as one of the low-lying states, even though they reproduce the experimental g-factor very well. This indicates that modifications to the available interactions are required. Future plans include isotope-shift measurements on radioactive Mg isotopes and beta-NMR studies on 33Mg.
Abstract:
The radio communication system is one of the most critical systems of the overall satellite platform: it often represents the only means of communication between a spacecraft and the Ground Segment, or among a constellation of satellites. This thesis focuses on specific innovative architectures for on-board and on-ground radio systems. In particular, this work is an integral part of a space program started in 2004 at the University of Bologna, Forlì campus, which led to the completion of the microsatellite ALMASat-1, successfully launched on board the VEGA maiden flight. The success of this program led to the development of a second microsatellite, named ALMASat-EO, a three-axis stabilized microsatellite able to capture images of the Earth's surface. Therefore, the first objective of this study was the investigation of an innovative, efficient and low-cost architecture for on-board radio communication systems. The design and realization of the TT&C system and of the high-data-rate transmitter for image downlink are thoroughly described in this work, together with the development of the embedded hardware and the adopted antenna systems. Moreover, considering the increasing interest in the development of constellations of microsatellites, in particular those flying in close formation, a careful analysis has been carried out for the development of innovative communication protocols for inter-satellite links. Furthermore, in order to investigate the system aspects of space communications, a study has been carried out at ESOC with the objective of designing, implementing and testing two experimental devices for the enhancement of the ESA Ground Segment. Thus, a significant portion of this thesis is dedicated to describing the results of a method for improving the phase stability of Ground Segment radio-frequency equipment by means of real-time phase compensation, and a new way of performing two-antenna arraying tracking using existing ESA tracking station facilities.