951 results for "Specific inhalation challenge test"


Relevance: 30.00%

Publisher:

Abstract:

Background Previous studies have established that mycobacterial infections ameliorate allergic inflammation. However, a non-infectious approach that controls allergic responses might represent a safer and more promising strategy. The 60-65 kDa heat shock protein (Hsp) family is endowed with anti-inflammatory properties, but it is still unclear whether and how a single mycobacterial Hsp controls allergic disorders. Objective Therefore, in this study we determined whether the administration of Mycobacterium leprae Hsp65 expressed from a recombinant DNA plasmid could attenuate a previously established allergic response. Methods We used an experimental model of airway allergic inflammation to test the effects of immunotherapy with DNA encoding Hsp65. Allergic mice, previously sensitized and challenged with ovalbumin, were treated with three intramuscular doses of recombinant DNA encoding Hsp65. After treatment, mice received a second allergen challenge and the allergic response was measured. Results We found that immunotherapy attenuated eosinophilia, pulmonary inflammation, Th2 cytokine production and mucus production. Moreover, we showed that the inhibition of the allergic response is dependent on IL-10 production. Both Hsp65- and allergen-specific IL-10-producing cells contributed to this effect. Cells transferred from DNA-immunized mice to allergic mice migrated to allergic sites and down-modulated the Th2 response. Conclusions and Clinical Relevance Our findings clearly show that immunotherapy with DNA encoding Hsp65 can attenuate an established Th2 allergic inflammation through an IL-10-dependent mechanism; moreover, the migration of allergen- and Hsp65-specific cells to the allergic sites plays a fundamental role. This work represents a novel contribution to the understanding of immune regulation by Hsp65 in allergic diseases.

Relevance: 30.00%

Publisher:

Abstract:

1. Sodium is often a limiting nutrient for terrestrial animals, and may be especially sought by herbivores. Leafcutter ants are dominant herbivores in the Neotropics, and leafcutter foraging may be affected by the nutritional demands of the colony and/or those of their symbiotic fungal mutualists. We hypothesized that leafcutter colonies are sodium limited, and that leafcutter ants therefore forage specifically for sodium. 2. Previous studies demonstrated that leafcutter Atta cephalotes Linnaeus workers preferentially cut and remove paper baits treated with NaCl relative to water control baits. Atta cephalotes colonies in this study were presented with baits offering NaCl, Na2SO4, and KCl to test whether leafcutters forage specifically for sodium. Sucrose and water were used as positive and negative controls, respectively. 3. Atta foragers removed significantly more of the baits treated with NaCl and Na2SO4 than of the KCl treatment, which did not differ from water. The NaCl and Na2SO4 treatments were collected at similar rates. We conclude that A. cephalotes forages specifically for sodium rather than for anions (chloride) or solutes in general. This study supports the hypothesis that leafcutter ants are limited by, and preferentially forage for, sodium.

Relevance: 30.00%

Publisher:

Abstract:

Abstract Background The importance of the lung parenchyma in the pathophysiology of asthma has previously been demonstrated. Considering that nitric oxide synthases (NOS) and arginases compete for the same substrate, it is worthwhile to elucidate the effects of complex NOS-arginase dysfunction in the pathophysiology of asthma, particularly in relation to distal lung tissue. We evaluated the effects of arginase and iNOS inhibition on distal lung mechanics and oxidative stress pathway activation in a model of chronic pulmonary allergic inflammation in guinea pigs. Methods Guinea pigs were exposed to repeated ovalbumin inhalations (twice a week for 4 weeks). The animals received 1400W (an iNOS-specific inhibitor) for 4 days beginning at the last inhalation. Afterwards, the animals were anesthetized and exsanguinated; then, a slice of the distal lung was evaluated by oscillatory mechanics, and an arginase inhibitor (nor-NOHA) or vehicle was infused in a Krebs solution bath. Tissue resistance (Rt) and elastance (Et) were assessed before and after ovalbumin challenge (0.1%), and lung strips were submitted to histopathological studies. Results Ovalbumin-exposed animals presented an increase in the maximal Rt and Et responses after antigen challenge (p<0.001), in the number of iNOS-positive cells (p<0.001) and in the expression of arginase 2, 8-isoprostane and NF-kB (p<0.001) in distal lung tissue. Administration of 1400W reduced all of these responses (p<0.001) in the alveolar septa. Ovalbumin-exposed animals that received nor-NOHA showed reductions in Rt and Et after antigen challenge, as well as in iNOS-positive cells and in 8-isoprostane and NF-kB expression (p<0.001) in lung tissue. The activity of arginase 2 was reduced only in the groups treated with nor-NOHA (p<0.05). There was a reduction of 8-isoprostane expression in OVA-NOR-W compared to OVA-NOR (p<0.001).
Conclusions In this experimental model, increased arginase content and iNOS-positive cells were associated with the constriction of distal lung parenchyma. This functional alteration may be due to a high expression of 8-isoprostane, which had a procontractile effect. The mechanism involved in this response is likely related to the modulation of NF-kB expression, which contributed to the activation of the arginase and iNOS pathways. The association of both inhibitors potentiated the reduction of 8-isoprostane expression in this animal model.

Relevance: 30.00%

Publisher:

Abstract:

Abstract Background Several studies have demonstrated the involvement of the dorsolateral portion of the periaqueductal grey matter (dlPAG) in defensive responses. This region contains a significant number of neurons containing the enzyme nitric oxide synthase (NOS), and previous studies showed that non-selective NOS inhibition or glutamate NMDA-receptor antagonism in the dlPAG causes anxiolytic-like effects in the elevated plus maze. Methods In the present study we verified whether the NMDA/NO pathway in the dlPAG is also involved in the behavioral suppression observed in rats submitted to the Vogel conflict test (VCT). The involvement of this pathway was investigated by using a selective nNOS inhibitor, Nω-propyl-L-arginine (N-Propyl, 0.08 nmol/200 nL), a NO scavenger, carboxy-PTIO (c-PTIO, 2 nmol/200 nL), and a specific NMDA receptor antagonist, LY235959 (4 nmol/200 nL). Results Intra-dlPAG microinjection of these drugs increased the number of punished licks without changing the number of unpunished licks or the nociceptive threshold, as measured by the tail flick test. Conclusion The results indicate that activation of NMDA receptors and increased production of NO in the dlPAG are involved in the anxiety behavior displayed by rats in the VCT.

Relevance: 30.00%

Publisher:

Abstract:

The objective of this study was to evaluate the push-out bond strength of fiberglass posts luted with five ionomer cements. The interface between cement and dentin was also inspected by SEM. Fifty human canines were chosen after a rigorous screening process, endodontically treated and divided randomly into five groups (n = 10) according to the cement tested: Group I – Ionoseal (VOCO), Group II – Fuji I (GC), Group III – Fuji II Improved (GC), Group IV – Rely X Luting 2 (3M ESPE), Group V – Ketac Cem (3M ESPE). The post space was prepared to receive a fiberglass post, which was tried in before the cementation process. No dentin or post surface pretreatment was carried out. After post bonding, all roots were cross-sectioned to obtain three thin slices (1 mm) from three specific regions of the tooth (cervical, middle and apical). A universal testing machine was used to carry out the push-out test at a cross-head speed of 0.5 mm/min. All failed specimens were observed under an optical microscope to identify the failure mode. Representative specimens from each group were inspected under SEM. The data were analyzed by Kolmogorov-Smirnov and Levene's tests, two-way ANOVA, and Tukey's post hoc test at a significance level of 5%. The images obtained were compared to determine the most frequent failure types at the different levels. SEM inspection showed that all cements filled the space between post and dentin; however, imperfections such as bubbles and voids were noticed in all groups to some extent. The push-out test showed that Ketac Cem presented significantly higher bond strength than Ionoseal (P = 0.02). There were no statistically significant differences among the other cements.

Relevance: 30.00%

Publisher:

Abstract:

Abstract Background Obstructive sleep apnea (OSA) is a respiratory disease characterized by the collapse of the extrathoracic airway and has important social implications related to accidents and cardiovascular risk. The main objective of the present study is to investigate whether the drop in expiratory flow and the volume expired in 0.2 s during the application of negative expiratory pressure (NEP) are associated with the presence and severity of OSA in a population of professional interstate bus drivers who travel medium and long distances. Methods/Design An observational, analytic study will be carried out involving adult male employees of an interstate bus company. Those who agree to participate will undergo a detailed patient history and a physical examination involving determination of blood pressure, anthropometric data, circumference measurements (hips, waist and neck), tonsil evaluation and the Mallampati index. Moreover, specific questionnaires addressing sleep apnea and excessive daytime sleepiness will be administered. Data acquisition will be completely anonymous. Following the medical examination, the participants will undergo spirometry, the NEP test and standard overnight polysomnography. The NEP test is performed through the administration of negative pressure at the mouth during expiration. It is a practical test performed while awake and requires little cooperation from the subject. In the absence of expiratory flow limitation, the increase in the pressure gradient between the alveoli and the open upper airway caused by NEP results in an increase in expiratory flow. Discussion Despite the abundance of scientific evidence, OSA is still underdiagnosed in the general population. In addition, diagnostic procedures are expensive, and predictive criteria are still unsatisfactory. Because increased upper airway collapsibility is one of the main determinants of OSA, the response to the application of NEP could be a predictor of this disorder.
Through this study protocol, the expectation is to find predictive NEP values for different degrees of OSA in order to contribute toward an early diagnosis of this condition and to reduce its impact and complications among commercial interstate bus drivers.

Relevance: 30.00%

Publisher:

Abstract:

Prenatal immune challenge (PIC) in pregnant rodents produces offspring with abnormalities in behavior, histology, and gene expression that are reminiscent of schizophrenia and autism. Based on this, the goal of this article was to review the main contributions of PIC models, especially the one using the viral-mimetic particle polyriboinosinic-polyribocytidylic acid (poly-I:C), to the understanding of the etiology, biological basis and treatment of schizophrenia. This systematic review consisted of a search of available web databases (PubMed, SciELO, LILACS, PsycINFO, and ISI Web of Knowledge) for original studies published in the last 10 years (May 2001 to October 2011) concerning animal models of PIC, focusing on those using poly-I:C. The results showed that the PIC model with poly-I:C is able to mimic the prodrome and both the positive and negative/cognitive dimensions of schizophrenia, depending on the specific gestational time window of the immune challenge. The model resembles the neurobiology and etiology of schizophrenia and has good predictive value. In conclusion, this model is a robust tool for the identification of novel molecular targets during prenatal life, adolescence and adulthood that might contribute to the development of preventive and/or treatment strategies (targeting specific symptoms, i.e., positive or negative/cognitive) for this devastating mental disorder, while also presenting better biosafety than viral infection models. One limitation of this model is its inability to reproduce the full spectrum of immune responses normally induced by viral exposure.

Relevance: 30.00%

Publisher:

Abstract:

This study analyzed the weight loss and surface roughness caused in Plexiglass specimens by conventional dentifrices (Sorriso, Colgate and Close Up) and specific dentifrices used for the cleaning of dentures (Corega and Dentu Creme). Plexiglass specimens were divided into 6 groups (n=6), comprising a control (distilled water - DW) and experimental groups. Brushing was performed in a toothbrushing machine with a soft brush and a dentifrice suspension or DW for different brushing times (50, 100, 200 and 250 min, i.e., 18,000, 36,000, 72,000 and 90,000 cycles, respectively, calculated to correspond to 1, 2, 4 and 5 years of regular brushing). The results for weight loss and surface roughness were analyzed by ANOVA and Tukey's test at a 5% significance level. At all tested times, the effect of DW was insignificant. The dentifrices differed significantly from DW from the initial period onward. Corega caused the greatest mass loss at all studied times, followed by Close Up. The dentifrices produced a surface roughness similar to that of DW at 50 min. At the other times, Sorriso, Colgate and Corega caused more surface roughness than DW. In conclusion, the specific dentifrices caused larger mass loss and lower surface roughness than the conventional dentifrices.

Relevance: 30.00%

Publisher:

Abstract:

The aim of this study was to evaluate, histometrically, the bone healing of the molar extraction socket under cigarette smoke inhalation (CSI). Forty male Wistar rats were randomly assigned to a test group (animals exposed to CSI, starting 3 days before tooth extraction and maintained until sacrifice; n=20) and a control group (animals never exposed to CSI; n=20). Second mandibular molars were bilaterally extracted and the animals (n=5/group/period) were sacrificed at 3, 7, 10 and 14 days after surgery. Digital images were analyzed according to the following histometric parameters: osteoid tissue (OT), remaining area (RA), mineralized tissue (MT) and non-mineralized tissue (NMT) in the molar socket. Intergroup analysis showed no significant differences at day 3 (p>0.05) for any parameter. On the 7th day, CSI negatively affected bone formation (p<0.05) with respect to NMT and RA (MT: 36%, NMT: 53%, RA: 12%; and MT: 39%, NMT: 29%, RA: 32%, for the control and test groups, respectively). In contrast, no statistically significant differences (p>0.05) were found at days 10 and 14. It may be concluded that CSI may affect socket healing from the early events of the healing process onward, which may be critical for the amount and quality of new bone formation in smokers.

Relevance: 30.00%

Publisher:

Abstract:

This study used an in vitro model to evaluate the effect of beverages on dental enamel previously subjected to an erosive challenge with hydrochloric acid. The factor under study was the type of beverage, in five levels: Sprite® Zero low-calorie lime soda (positive control), Parmalat® ultra-high-temperature (UHT) milk, Ades® Original soymilk, Leão® Ice Tea Zero ready-to-drink low-calorie peach-flavored black tea, and Prata® natural mineral water (negative control). Seventy-five bovine enamel specimens were distributed among the five types of beverages (n=15) according to a randomized complete block design. For the formation of erosive wear lesions, the specimens were immersed in 10 mL of a 0.01 M aqueous solution of hydrochloric acid for 2 min. Subsequently, the specimens were immersed in 20 mL of the beverages for 1 min, twice daily for 2 days at room temperature. In between, the specimens were kept in 20 mL of artificial saliva at 37ºC. The response variable was quantitative enamel microhardness. ANOVA and Tukey's test showed highly significant differences (p<0.00001) in the enamel exposed to hydrochloric acid and beverages. The soft drink caused a significantly greater decrease in microhardness than the other beverages. The black tea caused a significantly greater reduction in microhardness than the mineral water, UHT milk and soymilk, but a smaller one than the soft drink. Among the analyzed beverages, the soft drink and the black tea caused the most deleterious effects on dental enamel microhardness.

Relevance: 30.00%

Publisher:

Abstract:

The reduction of friction and wear in systems with metal-to-metal contact, as in several mechanical components, represents a traditional challenge in tribology. In this context, this work presents a computational study based on the linear Archard wear law and finite element modeling (FEM) to analyze the unlubricated sliding wear observed in typical pin-on-disc tests. The model was developed in the finite element software Abaqus® with 3-D deformable geometries and elastic-plastic material behavior for the contact surfaces. Archard's wear model was implemented in a FORTRAN user subroutine (UMESHMOTION) to describe sliding wear. Debris and oxide formation mechanisms were taken into account through a global wear coefficient obtained from experimental measurements. The implementation computes surface wear incrementally from the nodal displacements by means of adaptive mesh tools that rearrange local nodal positions. In this way, the worn track was obtained and the new surface profile was integrated for mass-loss assessment. This work also presents experimental pin-on-disc tests with AISI 4140 pins on rotating AISI H13 discs at normal loads of 10, 35, 70 and 140 N, covering the mild, transition and severe wear regimes, at a sliding speed of 0.1 m/s. Numerical and experimental results were compared in terms of wear rate and friction coefficient. Furthermore, the numerical simulation was used to analyze the stress field distribution and the changes in the surface profile across the worn track of the disc. The numerical formulation proved more appropriate for predicting the mild wear regime than the severe one, mainly because of the shorter running-in period observed at lower loads, which characterizes this kind of regime.
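The incremental wear computation described above can be sketched in a few lines of Python. This is a hypothetical minimal illustration of the linear Archard update in depth form (wear-depth increment proportional to contact pressure and sliding distance), not the Abaqus/UMESHMOTION implementation itself; the wear coefficient, pressure and sliding increment below are placeholder values, not the paper's measurements.

```python
# Linear Archard update in depth form: dh = k * p * ds, with
#   k  dimensional wear coefficient [1/Pa] (global, from experiment)
#   p  local contact pressure [Pa]
#   ds sliding distance covered in one increment [m]
def archard_increment(pressure, slide_increment, wear_coeff):
    """Wear-depth increment for one contact node and one sliding step."""
    return wear_coeff * pressure * slide_increment

# Toy accumulation over 1000 sliding increments (placeholder values)
wear_coeff = 1.0e-13   # 1/Pa, illustrative only
pressure = 2.0e8       # Pa, illustrative contact pressure
ds = 1.0e-4            # m slid per increment
depth = 0.0
for _ in range(1000):
    depth += archard_increment(pressure, ds, wear_coeff)
# total slid distance 0.1 m -> depth = 1e-13 * 2e8 * 0.1 = 2e-6 m
```

In the FEM loop of the study this per-node increment drives the adaptive-mesh nodal displacements; here it only accumulates a scalar depth to show the update rule.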

Relevance: 30.00%

Publisher:

Abstract:

Cardiac morphogenesis is a complex process governed by evolutionarily conserved transcription factors and signaling molecules. The Drosophila cardiac tube is linear and made of 52 pairs of cardiomyocytes (CMs), which express specific transcription factor genes whose human homologues (NKX2-5, GATA4 and TBX5) are implicated in Congenital Heart Diseases (CHDs). The tube is composed of a rostral portion named the aorta and a caudal one called the heart, distinguished by morphological and functional differences controlled by Hox genes, key regulators of axial patterning. Overexpression and inactivation of the Hox gene abdominal-A (abd-A), which is expressed exclusively in the heart, revealed that abd-A controls heart identity. The aim of our work is to isolate the heart-specific cis-regulatory sequences of abd-A direct target genes, the realizator genes granting heart identity. In each segment of the heart, four pairs of CMs express tinman (tin), homologous to NKX2-5, and acquire strong contractile and automatic rhythmic activities. By tyramide-amplified FISH, we found that seven genes, encoding ion channels, pumps or transporters, are specifically expressed in the Tin-CMs of the heart. We initially used available online tools to identify their heart-specific cis-regulatory modules by looking for Conserved Non-coding Sequences containing clusters of binding sites for various cardiac transcription factors, including Hox proteins. Based on these data we generated several reporter gene constructs and transgenic embryos, but none of them showed reporter gene expression in the heart. To identify additional abd-A target genes, we performed microarray experiments comparing the transcriptomes of the aorta versus the heart and identified 144 genes overexpressed in the heart.
To find the heart-specific cis-regulatory regions of these target genes, we developed a new bioinformatic approach in which prediction is based on pattern matching and ordered statistics. We first retrieved Conserved Non-coding Sequences from the alignment between the D. melanogaster and D. pseudoobscura genomes. We scored combinations of conserved occurrences of ABD-A, ABD-B, TIN, PNR, dMEF2, MADS-box, T-box and E-box sites and ranked the results with two independent strategies: on one hand, we ranked the putative cis-regulatory sequences according to the best-scoring ABD-A binding sites; on the other hand, we ranked them according to the conservation of the binding sites. We then integrated and re-ranked the two independently obtained lists to produce a final rank. We generated nGFP reporter construct flies for in vivo validation and identified three 1 kb-long heart-specific enhancers. By in vivo and in vitro experiments, we are determining whether they are direct abd-A targets, demonstrating the role of a Hox gene in the realization of heart identity. The identified abd-A direct target genes may also be targets of tin, pannier and the Doc genes, the homologues of NKX2-5, GATA4 and TBX5, respectively. The identification of sequences co-regulated by a Hox protein and by the homologues of transcription factors causing CHDs will provide a means to test whether these factors function as Hox cofactors granting cardiac specificity to Hox proteins, increasing our knowledge of the molecular mechanisms underlying CHDs. Finally, it may be investigated whether these Hox targets are themselves involved in CHDs.
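The binding-site scanning and ranking step can be shown schematically in Python. The consensus patterns below are hypothetical simplifications (real scans use position-weight matrices for the ABD-A, TIN, PNR, dMEF2, MADS-box, T-box and E-box sites over conserved sequences, and combine two ranking strategies); only the count-and-rank logic is illustrated.

```python
import re

# Hypothetical simplification: real scans use position-weight matrices;
# here each factor is reduced to a short consensus regex (illustrative
# patterns, not the matrices actually used in the study).
MOTIFS = {
    "ABD-A_like": r"TTAT[GT][GA]",    # homeodomain-like core, illustrative
    "TIN_like":   r"T[CT]AAGTG",      # NKX2-5/tin-like core, illustrative
    "E-box":      r"CA[ACGT][ACGT]TG",
}

def motif_counts(seq):
    """Count occurrences of each consensus motif in a candidate sequence."""
    return {name: len(re.findall(pat, seq.upper()))
            for name, pat in MOTIFS.items()}

def rank_candidates(candidates):
    """Rank candidate conserved sequences by total motif hits."""
    scored = [(name, sum(motif_counts(seq).values()))
              for name, seq in candidates.items()]
    return sorted(scored, key=lambda item: -item[1])

# Toy ranking: "cns1" contains one ABD-A-like site and one E-box
ranked = rank_candidates({"cns1": "TTATGGCAGGTG", "cns2": "AAAACCCC"})
# -> [("cns1", 2), ("cns2", 0)]
```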

Relevance: 30.00%

Publisher:

Abstract:

In the past decade, the advent of efficient genome sequencing tools and high-throughput experimental biotechnology has led to enormous progress in the life sciences. Among the most important innovations is microarray technology. It allows the expression of thousands of genes to be quantified simultaneously by measuring the hybridization of a tissue of interest to probes on a small glass or plastic slide. The characteristics of these data include a fair amount of random noise, a predictor dimension in the thousands, and a sample size in the dozens. One of the most exciting areas to which microarray technology has been applied is the challenge of deciphering complex diseases such as cancer. In these studies, samples are taken from two or more groups of individuals with heterogeneous phenotypes, pathologies, or clinical outcomes. These samples are hybridized to microarrays in an effort to find a small number of genes which are strongly correlated with the grouping of individuals. Even though methods to analyze these data are today well developed and close to reaching a standard organization (through the effort of international projects such as the Microarray Gene Expression Data (MGED) Society [1]), it is not infrequent to stumble upon a clinician's question for which no compelling statistical method is available. The contribution of this dissertation to deciphering disease is the development of new approaches aimed at handling open problems posed by clinicians in specific experimental designs. Chapter 1, starting from the necessary biological introduction, reviews microarray technologies and all the important steps of an experiment, from the production of the array through quality control to the preprocessing steps used in the data analyses in the rest of the dissertation.
Chapter 2 provides a critical review of standard analysis methods, stressing their main open problems. Chapter 3 introduces a method to address the issue of unbalanced design in microarray experiments. In microarray experiments, the experimental design is a crucial starting point for obtaining reasonable results. In a two-class problem, an equal or similar number of samples should be collected for the two classes. However, in some cases, e.g. rare pathologies, the approach to be taken is less evident. We propose to address this issue by applying a modified version of SAM [2]. MultiSAM consists of a reiterated application of a SAM analysis, comparing the less populated class (LPC) with 1,000 random samplings of the same size from the more populated class (MPC). A list of differentially expressed genes is generated for each SAM application. After 1,000 iterations, each probe is given a "score" ranging from 0 to 1,000 based on how often it recurs as differentially expressed across the 1,000 lists. The performance of MultiSAM was compared to that of SAM and LIMMA [3] over two data sets simulated via beta and exponential distributions. The results of all three algorithms over the low-noise data sets seem acceptable. However, on a real unbalanced two-channel data set regarding Chronic Lymphocytic Leukemia, LIMMA finds no significant probe and SAM finds 23 significantly changed probes but cannot separate the two classes, while MultiSAM finds 122 probes with score >300 and separates the data into two clusters by hierarchical clustering. We also report extra-assay validation in terms of differentially expressed genes. Although the standard algorithms perform well over low-noise simulated data sets, MultiSAM seems to be the only one able to reveal subtle differences in gene expression profiles on real unbalanced data. Chapter 4 describes a method to address similarity evaluation in a three-class problem by means of the Relevance Vector Machine (RVM) [4].
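The reiterated-resampling idea behind MultiSAM can be sketched as follows. This is a schematic stand-in, not the dissertation's implementation: a plain Welch-style t statistic with a fixed threshold replaces the SAM statistic, 200 iterations replace the 1,000 used in the chapter, and the toy expression values are invented.

```python
import random
from statistics import mean, stdev

def tstat(a, b):
    """Welch-style t statistic between two sample lists."""
    va, vb = stdev(a) ** 2 / len(a), stdev(b) ** 2 / len(b)
    return (mean(a) - mean(b)) / (va + vb) ** 0.5

def multisam_scores(lpc, mpc, n_iter=200, thresh=3.0, seed=0):
    """Score each probe by how often it is flagged as differentially
    expressed when the LPC is compared against random MPC subsamples
    of the same size (schematic stand-in for the SAM statistic)."""
    rng = random.Random(seed)
    n_lpc = len(next(iter(lpc.values())))
    n_mpc = len(next(iter(mpc.values())))
    scores = {p: 0 for p in lpc}
    for _ in range(n_iter):
        idx = rng.sample(range(n_mpc), n_lpc)
        for p in scores:
            sub = [mpc[p][i] for i in idx]
            if abs(tstat(lpc[p], sub)) > thresh:
                scores[p] += 1
    return scores

# Toy data: probe "diff" separates the classes, probe "same" does not
lpc = {"diff": [10.0, 10.1, 9.9, 10.2], "same": [0.0, 0.1, -0.1, 0.2]}
mpc = {"diff": [i * 0.01 for i in range(20)],
       "same": [i * 0.01 for i in range(20)]}
scores = multisam_scores(lpc, mpc)
# "diff" is flagged in every iteration, "same" in none
```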
In fact, looking at microarray data in a prognostic and diagnostic clinical framework, differences are not the only thing that can play a crucial role. In some cases similarities can give useful, and sometimes even more important, information. Given three classes, the goal could be to establish, with a certain level of confidence, whether the third one is more similar to the first or to the second. In this work we show that the RVM could be a possible solution to this limitation of standard supervised classification. In fact, the RVM offers many advantages compared, for example, with its well-known precursor, the Support Vector Machine (SVM) [3]. Among these advantages, the estimate of the posterior probability of class membership represents a key feature for addressing the similarity issue. This is a highly important, but often overlooked, option of any practical pattern recognition system. We focused on a three-class tumor-grade problem, with 67 samples of grade 1 (G1), 54 samples of grade 3 (G3) and 100 samples of grade 2 (G2). The goal is to find a model able to separate G1 from G3, and then to evaluate the third class, G2, as a test set, obtaining for each G2 sample the probability of belonging to class G1 or class G3. The analysis showed that breast cancer samples of grade 2 have a molecular profile more similar to that of breast cancer samples of grade 1. This result had been conjectured in the literature, but no measure of significance had been given before.
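The evaluate-the-third-class scheme can be illustrated with any classifier that outputs posterior probabilities of class membership. Below, a hand-rolled 1-D logistic regression stands in for the RVM (which is not part of standard libraries), and the expression values are synthetic placeholders; only the logic (train on G1 versus G3, then average the posteriors over G2) mirrors the chapter.

```python
import math
import random

def train_logistic(xs, ys, lr=0.1, epochs=2000):
    """Fit a 1-D logistic regression P(y=1|x) = sigmoid(w*x + b)
    by batch gradient descent."""
    w = b = 0.0
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            gw += (p - y) * x
            gb += (p - y)
        w -= lr * gw / len(xs)
        b -= lr * gb / len(xs)
    return w, b

def posterior(w, b, x):
    """Posterior probability of class 1 (here: G3) for sample x."""
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

# Synthetic one-gene expression values (placeholders): G1 near 0,
# G3 near 4, and the intermediate class G2 near 1
rng = random.Random(1)
g1 = [rng.gauss(0.0, 0.5) for _ in range(50)]
g3 = [rng.gauss(4.0, 0.5) for _ in range(50)]
g2 = [rng.gauss(1.0, 0.5) for _ in range(50)]

# Train on G1 (label 0) versus G3 (label 1), then average the
# posterior of G3 membership over the held-out class G2
w, b = train_logistic(g1 + g3, [0] * 50 + [1] * 50)
mean_p_g3 = sum(posterior(w, b, x) for x in g2) / len(g2)
# G2 lies nearer G1, so mean_p_g3 falls below 0.5
```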

Relevance: 30.00%

Publisher:

Abstract:

Since the first underground nuclear explosion, carried out in 1958, the analysis of seismic signals generated by these sources has allowed seismologists to refine the travel times of seismic waves through the Earth and to verify the accuracy of location algorithms (the ground truth for these sources was often known). Long international negotiations have been devoted to limiting the proliferation and testing of nuclear weapons. In particular, the Comprehensive Nuclear-Test-Ban Treaty (CTBT) was opened for signature in 1996; although it has been signed by 178 States, it has not yet entered into force. The Treaty underlines the fundamental role of seismological observations in verifying its compliance, by detecting and locating seismic events and identifying the nature of their sources. A precise definition of the hypocentral parameters represents the first step in discriminating whether a given seismic event is natural or not. In case a specific event is deemed suspicious by the majority of the States Parties, the Treaty contains provisions for conducting an on-site inspection (OSI) in the area surrounding the epicenter of the event, located through the International Monitoring System (IMS) of the CTBT Organization. An OSI is supposed to include the use of passive seismic techniques in the area of the suspected clandestine underground nuclear test. In fact, high-quality seismological systems are thought to be capable of detecting and locating the very weak aftershocks triggered by underground nuclear explosions in the first days or weeks following the test. This PhD thesis deals with the development of two different seismic location techniques. The first, known as the double-difference joint hypocenter determination (DDJHD) technique, is aimed at locating closely spaced events at a global scale. The locations obtained by this method are characterized by a high relative accuracy, although the absolute location of the whole cluster remains uncertain.
We eliminate this problem by introducing a priori information: the known location of a selected event. The second technique concerns the reliable estimation of the back azimuth and apparent velocity of seismic waves from local events of very low magnitude recorded by a tripartite array at a very local scale. For both techniques, we have used cross-correlation among digital waveforms in order to minimize the errors linked with incorrect phase picking. The cross-correlation method relies on the similarity between the waveforms of a pair of events at the same station, at the global scale, and on the similarity between the waveforms of the same event at two different sensors of the tripartite array, at the local scale. After preliminary tests of the reliability of our location techniques based on simulations, we applied both methodologies to real seismic events. The DDJHD technique was applied to a seismic sequence that occurred in the Turkey-Iran border region, using the data recorded by the IMS. Initially, the algorithm was applied to the differences among the original arrival times of the P phases, without using cross-correlation. We found that the large geometrical spreading noticeable in the standard locations (namely, the locations produced by the analysts of the International Data Center (IDC) of the CTBT Organization, taken as our reference) was considerably reduced by the application of our technique. This is what we expected, since the methodology was applied to a sequence of events for which a real closeness among the hypocenters, belonging to the same seismic structure, can be assumed. Our results point out the main advantage of this methodology: the systematic errors affecting the arrival times have been removed or at least reduced.
The introduction of cross-correlation did not bring evident improvements to our results: the two sets of locations (without and with the application of the cross-correlation technique) are very similar to each other. This suggests that the use of cross-correlation did not substantially improve the precision of the manual picks. Probably the picks reported by the IDC are good enough to make the random picking error less important than the systematic error on travel times. As a further explanation for the poor results given by cross-correlation, it should be noted that the events in our data set generally do not have a good signal-to-noise ratio (SNR): the selected sequence is composed of weak events (magnitude 4 or smaller) and the signals are strongly attenuated because of the large distance between the stations and the hypocentral area. At the local scale, in addition to cross-correlation, we performed a signal interpolation in order to improve the time resolution. The algorithm so developed was applied to the data collected during an experiment carried out in Israel between 1998 and 1999. The results pointed out the following relevant conclusions: a) it is necessary to correlate waveform segments corresponding to the same seismic phases; b) it is not essential to select the exact first arrivals; and c) relevant information can also be obtained from the maximum-amplitude wavelet of the waveforms (particularly in poor SNR conditions). Another remarkable point of our procedure is that its application does not demand a long processing time, so the user can immediately check the results. During a field survey, this feature makes a quasi-real-time check possible, allowing the immediate optimization of the array geometry if so suggested by the results at an early stage.
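The waveform cross-correlation with sub-sample refinement discussed above can be sketched generically. This is an illustration of the technique (full discrete cross-correlation plus parabolic interpolation of the peak), not the thesis code; the sampling interval and the synthetic Gaussian pulses are placeholders.

```python
import numpy as np

def xcorr_delay(a, b, dt):
    """Estimate the delay (s) of waveform b relative to a using the
    full discrete cross-correlation, then parabolic interpolation of
    the correlation peak for sub-sample resolution."""
    c = np.correlate(b, a, mode="full")
    k = int(np.argmax(c))
    shift = 0.0
    if 0 < k < len(c) - 1:
        y0, y1, y2 = c[k - 1], c[k], c[k + 1]
        denom = y0 - 2.0 * y1 + y2
        if denom != 0.0:
            # Vertex of the parabola through the three samples
            shift = 0.5 * (y0 - y2) / denom
    return (k - (len(a) - 1) + shift) * dt

# Toy check: a Gaussian pulse and a copy delayed by 5 samples (0.05 s)
dt = 0.01
t = np.arange(200) * dt
a = np.exp(-((t - 0.50) ** 2) / 0.002)
b = np.exp(-((t - 0.55) ** 2) / 0.002)
delay = xcorr_delay(a, b, dt)  # close to 0.05
```

At the global scale the two inputs would be waveforms of two events at one station; at the local scale, the same event at two sensors of the tripartite array.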

Relevância:

30.00%

Publicador:

Resumo:

Context-aware computing is currently considered the most promising approach to overcoming information overload and speeding up access to relevant information and services. Context-awareness may be derived from many sources, including user profile and preferences, network information, and sensor analysis; usually it relies on the ability of computing devices to interact with the physical world, i.e. with the natural and artificial objects hosted within the “environment”. Ideally, context-aware applications should not be intrusive and should be able to react according to the user’s context with minimum user effort. Context is an application-dependent multidimensional space, and location has been an important part of it since the very beginning. Location can be used to guide applications in providing the information or functions that are most appropriate for a specific position; hence location systems play a crucial role. There are several technologies and systems for computing location to varying degrees of accuracy, each tailored to a specific space model, i.e. indoors or outdoors, structured or unstructured spaces. The research challenge faced by this thesis is pedestrian positioning in heterogeneous environments; in particular, the focus is on pedestrian identification, localization, orientation and activity recognition. This research was mainly carried out within the “mobile and ambient systems” workgroup of EPOCH, an FP6 Network of Excellence on the application of ICT to Cultural Heritage. Therefore, applications in Cultural Heritage sites were the main target of the context-aware services discussed. Cultural Heritage sites are considered significant test-beds for context-aware computing for many reasons: for example, building a smart environment in museums or in protected sites is a challenging task, because localization and tracking are usually based on technologies that are difficult to hide or harmonize within the environment.
It is therefore expected that the experience gained in this research may also be useful in domains other than Cultural Heritage. This work presents three different approaches to pedestrian identification, positioning and tracking: pedestrian navigation by means of a wearable inertial sensing platform assisted by a vision-based tracking system for initial settings and real-time calibration; pedestrian navigation by means of a wearable inertial sensing platform augmented with GPS measurements; and pedestrian identification and tracking combining the vision-based tracking system with WiFi localization. The proposed localization systems have mainly been used to enhance Cultural Heritage applications by providing information and services depending on the user’s actual context, in particular on the user’s location.
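The second approach, inertial dead reckoning augmented with GPS, can be caricatured as a step-and-heading update blended toward each GPS fix. This is an illustrative sketch only, not the system described in the abstract (a real implementation would use a proper filter, and every name and parameter here is hypothetical):

```python
import numpy as np

def dead_reckon_step(pos, heading, step_len, gps=None, alpha=0.2):
    """One pedestrian dead-reckoning update (hypothetical helper):
    advance the position by one detected step along the current
    heading (0 rad = north), then blend in a GPS fix with weight
    `alpha` when one is available to bound the inertial drift."""
    pos = pos + step_len * np.array([np.sin(heading), np.cos(heading)])
    if gps is not None:
        pos = (1 - alpha) * pos + alpha * np.asarray(gps)
    return pos

# Walk 10 steps due north, correcting with a GPS fix every 5th step.
pos, heading = np.zeros(2), 0.0
for i in range(1, 11):
    gps = np.array([0.0, 0.75 * i]) if i % 5 == 0 else None
    pos = dead_reckon_step(pos, heading, step_len=0.7, gps=gps)
print(pos)
```

The blend weight plays the role the abstract assigns to the complementary sensor: the inertial platform supplies high-rate relative motion, while the sparse absolute fixes (GPS outdoors, vision or WiFi indoors) keep the accumulated drift bounded.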