957 results for Target Zone Model


Relevance: 90.00%

Abstract:

Bayesian nonparametric models, such as the Gaussian process and the Dirichlet process, have been extensively applied to target kinematics modeling in applications including environmental monitoring, traffic planning, endangered species tracking, dynamic scene analysis, autonomous robot navigation, and human motion modeling. As these successful applications show, Bayesian nonparametric models can adjust their complexity adaptively from data as necessary, and are resistant to overfitting and underfitting. However, most existing works assume that the sensor measurements used to learn the Bayesian nonparametric target kinematics models are obtained a priori, or that the target kinematics can be measured by the sensor at any given time throughout the task. Little work has been done on controlling a sensor with a bounded field of view to obtain the measurements of mobile targets that are most informative for reducing the uncertainty of the Bayesian nonparametric models. To present a systematic sensor planning approach to learning Bayesian nonparametric models, the Gaussian process target kinematics model is introduced first; it is capable of describing time-invariant spatial phenomena, such as ocean currents, temperature distributions and wind velocity fields. The Dirichlet process-Gaussian process target kinematics model is subsequently discussed for modeling mixtures of mobile targets, such as pedestrian motion patterns.

Novel information-theoretic functions are developed for these Bayesian nonparametric target kinematics models to represent the expected utility of measurements as a function of sensor control inputs and random environmental variables. A Gaussian process expected Kullback-Leibler (KL) divergence is developed as the expectation, with respect to the future measurements, of the KL divergence between the current (prior) and posterior Gaussian process target kinematics models. This approach is then extended to develop a new information value function that can be used to estimate target kinematics described by a Dirichlet process-Gaussian process mixture model. A theorem is proposed showing that the novel information-theoretic functions are bounded. Based on this theorem, efficient estimators of the new information-theoretic functions are designed, and are proved to be unbiased, with the variance of the resulting approximation error decreasing linearly as the number of samples increases. The computational complexity of optimizing the novel information-theoretic functions under sensor dynamics constraints is studied and proved to be NP-hard. A cumulative lower bound is then proposed to reduce the computational complexity to polynomial time.
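The expected KL divergence described above can be sketched, for the finite-dimensional Gaussian marginals of a GP, using the standard closed form for the KL divergence between two multivariate Gaussians (the notation here is generic, not the dissertation's):

```latex
% Expectation over future measurements z of the prior-to-posterior KL divergence
\hat{D} \;=\; \mathbb{E}_{z}\!\left[\, D_{\mathrm{KL}}\big(\, p(f \mid \mathcal{D}) \,\big\|\, p(f \mid \mathcal{D}, z)\,\big)\right],

% where, for k-dimensional Gaussian marginals N(\mu_0,\Sigma_0) and N(\mu_1,\Sigma_1):
D_{\mathrm{KL}} \;=\; \tfrac{1}{2}\!\left[ \operatorname{tr}\!\big(\Sigma_1^{-1}\Sigma_0\big)
  + (\mu_1-\mu_0)^{\top}\Sigma_1^{-1}(\mu_1-\mu_0) - k
  + \ln\frac{\det\Sigma_1}{\det\Sigma_0} \right].
```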

Three sensor planning algorithms are developed according to the assumptions on the target kinematics and the sensor dynamics. For problems where the control space of the sensor is discrete, a greedy algorithm is proposed; its efficiency is demonstrated by a numerical experiment with ocean-current data obtained from moored buoys. A sweep line algorithm is developed for applications where the sensor control space is continuous and unconstrained; synthetic simulations as well as physical experiments with ground robots and a surveillance camera are conducted to evaluate its performance. Moreover, a lexicographic algorithm is designed, based on the cumulative lower bound of the novel information-theoretic functions, for scenarios where the sensor dynamics are constrained; numerical experiments with real data collected from indoor pedestrians by a commercial pan-tilt camera are performed to examine it. Results from both the numerical simulations and the physical experiments show that the three sensor planning algorithms proposed in this dissertation, based on the novel information-theoretic functions, are superior at learning the target kinematics with little or no prior knowledge.
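A greedy algorithm over a discrete control space, of the kind mentioned above, can be sketched as follows. This is a minimal stand-in using GP joint entropy (log-determinant of the measurement covariance) as the information measure, not the dissertation's own information value function; the kernel and all parameters are illustrative.

```python
import numpy as np

def rbf_kernel(X, Y, ls=1.0):
    # Squared-exponential covariance between two point sets.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def greedy_sensor_plan(candidates, k, noise=1e-2):
    """Greedily pick k sensing locations that maximize the joint entropy
    (log-det of the GP covariance) of the selected measurements."""
    chosen = []
    for _ in range(k):
        best, best_gain = None, -np.inf
        for i in range(len(candidates)):
            if i in chosen:
                continue
            idx = chosen + [i]
            S = rbf_kernel(candidates[idx], candidates[idx]) + noise * np.eye(len(idx))
            gain = np.linalg.slogdet(S)[1]  # log-determinant ~ joint entropy
            if gain > best_gain:
                best, best_gain = i, gain
        chosen.append(best)
    return chosen
```

Greedy selection is attractive here because entropy-style objectives are submodular, which gives the greedy plan a constant-factor optimality guarantee.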

Relevance: 80.00%

Abstract:

The economic occupation of a 500 ha area in Piracicaba was studied for the irrigated crops of maize, tomato, sugarcane and beans, using deterministic linear programming models and linear programming including risk via the Target-MOTAD model; two situations were analyzed. In the deterministic model, area was the restrictive factor, and water was not restrictive in any of the tested situations. For the first situation the maximum income obtained was R$ 1,883,372.87, and for the second it was R$ 1,821,772.40. In the risk-inclusive model, a risk-accepting producer can obtain, in the first situation, a maximum income of R$ 1,883,372.87 with a minimum risk of R$ 350 year(-1), and in the second situation R$ 1,821,772.40 with a minimum risk of R$ 40 year(-1). A risk-averse producer, in turn, can obtain in the first situation a maximum income of R$ 1,775,974.81 with null risk, and in the second situation R$ 1,707,706.26 with null risk, both without water restriction. These results highlight the importance of including risk when offering alternative occupations to the producer, allowing decision making that accounts for both risk aversion and income expectations.
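The deterministic part of such a study can be sketched as a small crop-allocation LP with SciPy. The per-hectare incomes below are hypothetical placeholders, not the study's data; only the 500 ha area constraint is taken from the abstract.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical net income per hectare (R$/ha): maize, tomato, sugarcane, beans.
income = np.array([3000.0, 5200.0, 3600.0, 2400.0])
total_area = 500.0  # ha; area is the restrictive factor in the study

# Maximize total income  <=>  minimize -income, subject to the area limit.
res = linprog(
    c=-income,
    A_ub=[[1.0, 1.0, 1.0, 1.0]],  # sum of planted areas <= 500 ha
    b_ub=[total_area],
    bounds=[(0.0, None)] * 4,
    method="highs",
)
areas = res.x          # optimal hectares per crop
max_income = -res.fun  # optimal total income
```

The Target-MOTAD extension would add one deviation variable per income scenario and a constraint bounding the expected shortfall below a target income, which is why it can trade a little income for lower risk.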

Relevance: 80.00%

Abstract:

Catalytic conversion of N2O to N2 with potassium catalysts supported on activated carbon (K/AC) was investigated. Potassium proves to be much more active and stable than either copper or cobalt because it possesses strong abilities both for N2O chemisorption and for oxygen transfer. Potassium redispersion is found to play a critical role in the catalyst stability. A detailed study of the reaction mechanism was conducted for three different catalyst loadings. It was found that during temperature-programmed reaction (TPR), the negative oxygen balance at low temperatures (< 50 degrees C) is due to oxidation of the external surface of the potassium oxide particles, while bulk oxidation accounts for the oxygen accumulation at higher temperatures (up to ca. 270 degrees C). N2O is beneficial for the removal of carbon-oxygen complexes because CO2 is formed instead of CO and because N2O makes the chemisorption of the produced CO2 on potassium oxide particles less stable. A conceptual three-zone model was proposed to clarify the reaction mechanism over K/AC catalysts. CO2 chemisorption at 250 degrees C proves to be an effective measurement of potassium dispersion. (C) 1999 Academic Press.

Relevance: 80.00%

Abstract:

Blast fragmentation can have a significant impact on the profitability of a mine. An optimum run of mine (ROM) size distribution is required to maximise the performance of downstream processes. If this fragmentation size distribution can be modelled and controlled, the operation will have made a significant advance towards improving its performance. Blast fragmentation modelling is an important step in Mine to Mill™ optimisation. It allows the estimation of blast fragmentation distributions for a number of different rock mass, blast geometry and explosive parameters. These distributions can then be carried through models of the downstream mining and milling processes to determine the optimum blast design. When a blast hole is detonated, rock breakage occurs in two different stress regions - compressive and tensile. In the first region, compressive stress waves form a 'crushed zone' directly adjacent to the blast hole. The second region, termed the 'cracked zone', occurs outside the crushed zone. The widely used Kuz-Ram model does not recognise these two blast regions. In the Kuz-Ram model the mean fragment size from the blast is approximated and is then used to estimate the remaining size distribution. Experience has shown that this model predicts the coarse end reasonably accurately, but it can significantly underestimate the amount of fines generated. As part of the Australian Mineral Industries Research Association (AMIRA) P483A Mine to Mill™ project, the Two-Component Model (TCM) and Crush Zone Model (CZM), developed by the Julius Kruttschnitt Mineral Research Centre (JKMRC), were compared and evaluated against measured ROM fragmentation distributions. An important criterion for this comparison was the deviation of model results from measured ROM in the fine to intermediate section (1-100 mm) of the fragmentation curve. This region of the distribution is important for Mine to Mill™ optimisation.
The comparison of modelled and Split ROM fragmentation distributions has been conducted in harder ores (UCS greater than 80 MPa); further work involves modelling softer ores. The comparisons will be continued with future site surveys to increase confidence in the comparison of the CZM and TCM against Split results. Stochastic fragmentation modelling will then be conducted to take into account the variation of input parameters, so that a window of possible fragmentation distributions can be compared to those obtained by Split. Following this work, an improved fragmentation model will be developed in response to these findings.
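The Kuz-Ram approach described above can be sketched in two lines: the Kuznetsov equation for the mean fragment size, and a Rosin-Rammler curve for the rest of the distribution. The parameter values in the test are purely illustrative, not from this project.

```python
import numpy as np

def kuz_ram_mean_size(A, K, Q, rws=115.0):
    """Kuznetsov mean fragment size (cm).
    A: rock factor, K: powder factor (kg/m^3), Q: explosive mass per hole (kg),
    rws: relative weight strength of the explosive (ANFO = 100)."""
    return A * K**-0.8 * Q**(1.0 / 6.0) * (115.0 / rws)**(19.0 / 20.0)

def rosin_rammler_passing(x, x50, n):
    """Fraction passing size x for a Rosin-Rammler distribution with
    median size x50 and uniformity index n (0.693 = ln 2 pins P(x50) = 0.5)."""
    return 1.0 - np.exp(-0.693 * (x / x50)**n)
```

The underestimation of fines noted in the abstract follows directly from this form: a single Rosin-Rammler curve fitted to the coarse end has too little mass at small x, which is exactly what the CZM's separate crushed-zone component is meant to correct.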

Relevance: 80.00%

Abstract:

Adhesive bonding as a joining or repair method has wide application in many industries. Repairs with bonded patches are often carried out to re-establish the stiffness at critical regions or at spots of corrosion and/or fatigue cracks. Single and double-strap repairs (SS and DS, respectively) are a viable repair option. For SS repairs, a patch is adhesively bonded on one of the structure's faces. SS repairs are easy to execute, but the load eccentricity leads to peak peel stresses at the overlap edges. DS repairs involve two patches, one on each face of the structure. These are more efficient than SS repairs, due to the doubling of the bonding area and the suppression of the transverse deflection of the adherends. Shear stresses also become more uniform as a result of smaller differential straining. The experimental and Finite Element (FE) study presented here for strength prediction and design optimization of bonded repairs includes SS and DS solutions with different values of overlap length (LO): 10, 20 and 30 mm. The failure strengths of the SS and DS repairs were compared with FE results obtained with the Abaqus® FE software. A Cohesive Zone Model (CZM) with a triangular shape in pure tensile and shear modes, including the mixed-mode possibility for crack growth, was used to simulate fracture of the adhesive layer. A good agreement was found between the experiments and the FE simulations on the failure modes, elastic stiffness and strength of the repairs, showing the effectiveness and applicability of the proposed FE technique in predicting the strength of bonded repairs. Furthermore, some optimization principles were proposed that will allow repair designers to effectively design repairs with adhesively-bonded patches.
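A triangular CZM of the kind used in this study is simple enough to write down directly: linear elastic loading to the cohesive strength, then linear softening to zero traction at complete failure. This is a generic single-mode sketch, not the paper's mixed-mode implementation; all parameter values are illustrative.

```python
def triangular_czm(delta, delta0, deltaf, t0):
    """Triangular (bilinear) traction-separation law.
    delta0: separation at damage onset, deltaf: separation at failure,
    t0: peak cohesive strength. Returns the traction at separation delta."""
    if delta <= 0.0:
        return 0.0
    if delta <= delta0:
        return t0 * delta / delta0                        # elastic branch
    if delta < deltaf:
        return t0 * (deltaf - delta) / (deltaf - delta0)  # linear softening
    return 0.0                                            # fully debonded

# The fracture toughness is the area under the law: Gc = 0.5 * t0 * deltaf.
```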

Relevance: 80.00%

Abstract:

This work reports on an experimental and finite element method (FEM) parametric study of adhesively-bonded single and double-strap repairs on carbon-epoxy structures under buckling-unrestrained compression. The influence of the overlap length and patch thickness was evaluated. This loading gains particular significance from the additional failure mechanisms characteristic of structures under compression, such as fibre microbuckling for buckling-restrained structures, or global buckling of the assembly if no transverse restriction exists. The FEM analysis is based on cohesive elements including mixed-mode criteria to simulate cohesive fracture of the adhesive layer. Trapezoidal laws in pure modes I and II were used to account for the ductility of most structural adhesives. These laws were estimated for the adhesive used from double cantilever beam (DCB) and end-notched flexure (ENF) tests, respectively, using an inverse technique. The pure mode III cohesive law was set equal to the pure mode II one. Compression failure in the laminates was predicted using a stress-based criterion. The accurate FEM predictions open a good prospect for reducing the extensive experimentation involved in the design of carbon-epoxy repairs. Design principles were also established for these repairs under buckling.
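The trapezoidal law mentioned above differs from the triangular one by a constant-traction plateau that captures adhesive ductility. A generic single-mode sketch (illustrative parameters, not the inversely-identified ones from the DCB/ENF tests):

```python
def trapezoidal_czm(delta, delta1, delta2, deltaf, t0):
    """Trapezoidal traction-separation law for ductile adhesives:
    elastic rise to strength t0 at delta1, plateau until delta2,
    then linear softening to zero traction at deltaf."""
    if delta <= 0.0 or delta >= deltaf:
        return 0.0
    if delta <= delta1:
        return t0 * delta / delta1                        # elastic branch
    if delta <= delta2:
        return t0                                         # ductile plateau
    return t0 * (deltaf - delta) / (deltaf - delta2)      # softening branch
```

The wider the plateau (delta2 - delta1), the larger the fracture energy at the same strength, which is why this shape fits ductile adhesives better than a triangle.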

Relevance: 80.00%

Abstract:

An experimental and numerical investigation into the shear strength behaviour of adhesive single lap joints (SLJs) was carried out in order to understand the effect of temperature on joint strength. The adherend material used for the experimental tests was an aluminium alloy in the form of thin sheets, and the adhesive was a high-strength, high-temperature epoxy. Tensile tests as a function of temperature were performed, and numerical predictions based on a bilinear cohesive damage model were obtained. It is shown that at temperatures below Tg the lap shear strength of SLJs increased, while at temperatures above Tg a drastic drop in lap shear strength was observed. Comparison between the experimental and numerical maximum loads representing the strength of the joints shows reasonably good agreement.

Relevance: 80.00%

Abstract:

Adhesive bonding has been increasingly used in recent years in place of other methods such as welding, bolted joints and riveted joints. Engineering plastics play an ever more prominent role in industry owing to their excellent properties. In this work three different polymers were considered: polyvinyl chloride (PVC) and polypropylene (PP), given their low cost and weight and chemically inert surface, and polytetrafluoroethylene (PTFE), due to its good chemical properties and excellent sliding properties. However, these materials have low surface energy and are therefore very difficult to bond, PTFE most of all. Thus, after a preliminary study, an adhesive from Tamarron Technology, "Tam Tech Adhesive", suited to this type of hard-to-bond substrate, was chosen for the required bonds. It was subsequently characterized through tensile tests on bulk specimens. The main objective of this work was to study single-lap joints of hard-to-bond polymeric materials such as PTFE, PP and PVC using an adhesive that requires no surface preparation. Single-lap joints (SLJ) were fabricated according to the Lap Shear (LS) and Block Shear (BS) methods for the three materials mentioned above, and the corresponding tests were performed to evaluate the mechanical behaviour of the adhesive bonds. The substrate materials were also subjected to tensile tests in order to obtain their elastic modulus and strength properties. The substrates involved in the adhesive joints received no special surface preparation; in most cases this consisted only of cleaning the surfaces with ethyl alcohol. For PTFE, however, preparation by abrasion with sandpaper and by flame treatment was also tried.
A numerical simulation by finite elements using a triangular cohesive damage model was also carried out. The shear strengths obtained are higher in BS than in LS, except for the PTFE substrates, where the results are similar. The flame treatment improved the mechanical strength of the joints. The numerical model was also found to simulate the joint behaviour adequately, particularly for the LS joints.

Relevance: 80.00%

Abstract:

The use of adhesive joints has increased in recent decades due to its competitive features compared with traditional methods. This work aims to estimate the tensile critical strain energy release rate (GIC) of adhesive joints by the Double-Cantilever Beam (DCB) test. The J-integral is used since it enables obtaining the tensile Cohesive Zone Model (CZM) law. An optical measuring method was developed for assessing the crack tip opening (δn) and adherends rotation (θo). The proposed CZM laws were best approximated by a triangular shape for the brittle adhesive and a trapezoidal shape for the two ductile adhesives.
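The direct (J-integral) method alluded to above rests on a standard relation (generic notation, not specific to this paper): the tensile cohesive law is the derivative of the J-integral with respect to the crack-tip opening, and the area under the law equals the critical strain energy release rate:

```latex
\sigma(\delta_n) \;=\; \frac{\partial J_I}{\partial \delta_n},
\qquad
G_{IC} \;=\; \int_0^{\delta_n^{f}} \sigma(\delta_n)\,\mathrm{d}\delta_n ,
```

where $\delta_n^{f}$ is the opening at failure. Measuring $J_I$ as a function of $\delta_n$ during the DCB test (hence the optical measurement of $\delta_n$ and $\theta_o$) therefore yields the complete CZM law by differentiation.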

Relevance: 80.00%

Abstract:

The use of adhesive bonding has increased significantly in recent years and is nowadays a dominant joining technique in the aeronautical and automotive industries. Adhesive joints aim to replace traditional mechanical fastening methods in the joining of structures. The improvement over the years of various damage prediction models, namely through the Finite Element Method (FEM), has helped the development of this joining technique. Cohesive Zone Models (CZM), used together with FEM, are a viable tool for predicting the strength of adhesive joints. CZM combine strength-of-materials criteria for damage initiation with fracture mechanics concepts for crack propagation. Several shapes of cohesive law can be applied in CZM simulations, depending on the expected behaviour of the materials being simulated. In this work, the effect of different cohesive law shapes on the predicted behaviour of adhesive joints was studied numerically, namely on the force-displacement (P-δ) curves of Double-Cantilever Beam tests for tensile characterization and End-Notched Flexure tests for shear characterization. The influence of the tensile and shear cohesive parameters on the P-δ curves of these tests was also studied. For the Araldite® AV138 in tension and in shear, the triangular law best predicts the adhesive's behaviour. For the strength prediction of both the Araldite® 2015 and SikaForce® 7752 adhesives, the trapezoidal law is the most suitable, confirming that this law best characterizes the damage behaviour of typically ductile adhesives. The parameter study revealed distinct influences on the predicted joint behaviour, although with considerable similarities among the different adhesive types.

Relevance: 80.00%

Abstract:

Adhesive joints have been used in several fields and have countless practical applications. Owing to their easy and fast fabrication, single-lap joints (SLJ) are a very common configuration. Increased strength, weight reduction and corrosion resistance are some of the advantages this type of joint offers over traditional joining processes. However, the stress concentration at the ends of the overlap is one of the main drawbacks. Few accurate design techniques exist for the diversity of joints found in real situations, which is an obstacle to the use of adhesive joints in structural applications. The present work compares different analytical and numerical methods for predicting the strength of SLJ with different overlap lengths (LO). The fundamental objective is to assess which method best predicts SLJ strength. Adhesive joints were produced between aluminium substrates using a brittle epoxy adhesive (Araldite® AV138), a moderately ductile epoxy adhesive (Araldite® 2015) and a ductile polyurethane adhesive (SikaForce® 7888). Different analytical methods and two numerical methods were considered: Cohesive Zone Models (CZM) and the eXtended Finite Element Method (XFEM), enabling a comparative analysis. The study provided a critical insight into the capabilities of each method according to the characteristics of the adhesive used. The analytical methods work reasonably well only under very specific conditions. The CZM analysis with a triangular law proved to be a very accurate method, except for highly ductile adhesives. On the other hand, the XFEM analysis proved to be a poorly suited technique, especially for mixed-mode damage growth.

Relevance: 80.00%

Abstract:

Occupational exposure assessment is an important stage in the management of chemical exposures. Few direct measurements are carried out in workplaces, and exposures are often estimated from expert judgement. There is therefore a major need for simple, transparent tools to help occupational health specialists define exposure levels. The aim of the present research is to develop and improve modelling tools for predicting exposure levels. In a first step, a survey was made among occupational hygienists in Switzerland to define their expectations of modelling tools (types of results, models and potential observable parameters). It was found that exposure models are rarely used in practice in Switzerland and that exposures are mainly estimated from the expert's past experience. Moreover, chemical emissions and their dispersion near the source were considered key parameters. Experimental and modelling studies were also performed in specific cases in order to test the flexibility and drawbacks of existing tools. In particular, models were applied to assess occupational exposure to carbon monoxide in different situations and compared with the exposure levels found in the literature for similar situations. Further, exposure to waterproofing sprays was studied as part of an epidemiological study on a Swiss cohort. In this case, laboratory investigations were undertaken to characterize the waterproofing overspray emission rate, and a classical two-zone model was then used to assess aerosol dispersion in the near and far field during spraying.
Experiments were also carried out to better understand the processes of emission and dispersion of a tracer compound, focusing on the characterization of near-field exposure. An experimental set-up was developed to perform simultaneous measurements at several points of an exposure chamber with direct-reading instruments. It was found that, from a statistical point of view, the compartmental theory makes sense, but that attribution to a given compartment could not be done on the basis of simple geometric considerations.
In a further step, the experimental data were complemented by observations made in about 100 different workplaces, associating exposure measurements with information on the observed determinants. The various data obtained were used to improve an existing two-compartment exposure model: a tool was developed to include specific determinants in the choice of compartment, thus largely improving the reliability of the predictions. All these investigations helped improve our understanding of modelling tools and identify their limitations. The integration of more accessible determinants, in line with experts' needs, should encourage model use in field practice. Moreover, by increasing the quality of modelling tools, this research will not only encourage their systematic use, but may also improve exposure assessments based on expert judgement and, therefore, the protection of workers' health.
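The classical two-zone (near-field/far-field) model used throughout this research can be sketched as a pair of mass-balance ODEs. This is the generic textbook form with illustrative parameters, not the calibrated model from the thesis.

```python
def two_zone_concentrations(G, Q, beta, V_nf, V_ff, t, dt=0.1):
    """Classical two-zone exposure model, integrated by forward Euler.
    G: emission rate into the near field (mg/min), Q: room ventilation (m^3/min),
    beta: interzonal airflow (m^3/min), V_nf/V_ff: zone volumes (m^3).
    Returns (near-field, far-field) concentrations (mg/m^3) at time t (min)."""
    c_nf, c_ff = 0.0, 0.0
    for _ in range(int(t / dt)):
        d_nf = (G + beta * c_ff - beta * c_nf) / V_nf  # near-field mass balance
        d_ff = (beta * c_nf - beta * c_ff - Q * c_ff) / V_ff  # far-field balance
        c_nf += d_nf * dt
        c_ff += d_ff * dt
    return c_nf, c_ff

# At steady state: C_ff = G / Q and C_nf = G / Q + G / beta,
# i.e. the near field always sits G/beta above the well-mixed room level.
```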

Relevance: 80.00%

Abstract:

Waterproofing agents are widely used to protect leather and textiles in both domestic and occupational activities. An outbreak of acute respiratory syndrome following exposure to waterproofing sprays occurred during the winter of 2002-2003 in Switzerland. About 180 cases were reported to the Swiss Toxicological Information Centre between October 2002 and March 2003, whereas fewer than 10 cases per year had been recorded previously. The reported cases involved three brands of sprays containing a common waterproofing mixture that had undergone a formulation change in the months preceding the outbreak. A retrospective analysis was undertaken in collaboration with the Swiss Toxicological Information Centre and the Swiss Registries for Interstitial and Orphan Lung Diseases to clarify the circumstances and possible causes of the observed health effects. Individual exposure data were generated with questionnaires and experimental emission measurements. The collected data were used to conduct numerical simulations for 102 cases of exposure; a classical two-zone model was used to assess the aerosol dispersion in the near and far field during spraying. The resulting assessed doses and exposure levels spanned several orders of magnitude. No dose-response relationship was found between exposure indicators and health effect indicators (perceived severity and clinical indicators). Weak relationships were found between unspecific inflammatory response indicators (leukocytes, C-reactive protein) and the maximal exposure concentration. These results reveal high interindividual variability in response and suggest that some indirect mechanism(s) predominates in the occurrence of the respiratory disease. Furthermore, no threshold could be found to define a safe level of exposure.
These findings suggest that improving environmental exposure conditions during spraying is not, by itself, a sufficient measure to prevent future outbreaks of waterproofing spray toxicity. More efficient preventive measures are needed prior to the marketing and distribution of new waterproofing agents.

Relevance: 80.00%

Abstract:

Simulated-annealing-based conditional simulations provide a flexible means of quantitatively integrating diverse types of subsurface data. Although such techniques are increasingly used in hydrocarbon reservoir characterization studies, their potential in environmental, engineering and hydrological investigations is still largely unexploited. Here, we introduce a novel simulated annealing (SA) algorithm geared towards the integration of high-resolution geophysical and hydrological data which, compared to more conventional approaches, provides significant advances in the way that large-scale structural information in the geophysical data is accounted for. Model perturbations in the annealing procedure are made by drawing from a probability distribution for the target parameter conditioned on the geophysical data. This is the only place where geophysical information is utilized in our algorithm, in marked contrast to other approaches where model perturbations are made through the swapping of values in the simulation grid and agreement with soft data is enforced through a correlation coefficient constraint. Another major feature of our algorithm is the way in which available geostatistical information is utilized. Instead of constraining realizations to match a parametric target covariance model over a wide range of spatial lags, we constrain the realizations only at smaller lags, where the available geophysical data cannot provide enough information. We thus allow the larger-scale subsurface features resolved by the geophysical data to have much greater control over the output realizations. Further, since the only component of the SA objective function required in our approach is a covariance constraint at small lags, our method has improved convergence and computational efficiency over more traditional methods.
Here, we present the results of applying our algorithm to the integration of porosity log and tomographic crosshole georadar data to generate stochastic realizations of the local-scale porosity structure. Our procedure is first tested on a synthetic data set and then applied to data collected at the Boise Hydrogeophysical Research Site.
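The overall SA loop described above (perturb by redrawing a cell, accept or reject against a small-lag geostatistical constraint) can be sketched on a 1-D toy field. Here a lag-1 variogram plays the role of the paper's small-lag covariance constraint, and the proposal is an unconditional draw; in the actual algorithm the draw would be conditioned on the geophysical data.

```python
import numpy as np

rng = np.random.default_rng(0)

def sa_conditional_sim(target_gamma, n=200, iters=20000, t0=1.0, cooling=0.9995):
    """Toy 1-D SA simulation: perturb single cells and use Metropolis
    acceptance to drive the lag-1 variogram towards target_gamma."""
    x = rng.standard_normal(n)

    def objective(v):
        gamma1 = 0.5 * np.mean(np.diff(v) ** 2)  # experimental variogram, lag 1
        return (gamma1 - target_gamma) ** 2

    obj, temp = objective(x), t0
    for _ in range(iters):
        i = rng.integers(n)
        old = x[i]
        x[i] = rng.standard_normal()  # propose a new value for one cell
        new_obj = objective(x)
        # Metropolis rule: always accept improvements, sometimes worsenings.
        if new_obj > obj and rng.random() >= np.exp((obj - new_obj) / temp):
            x[i] = old                # reject the perturbation
        else:
            obj = new_obj             # accept it
        temp *= cooling
    return x, obj
```

Because the constraint acts only at one small lag, each iteration is cheap, which mirrors the convergence and efficiency argument made in the abstract.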

Relevance: 80.00%

Abstract:

In recent years, massive protostars have emerged as a possible population of high-energy emitters. Among the best candidates is IRAS 16547-4247, a protostar that presents a powerful outflow with clear signatures of interaction with its environment. This source has been revealed as a potential high-energy source because it displays non-thermal radio emission of synchrotron origin, which is evidence of relativistic particles. To improve our understanding of IRAS 16547-4247 as a high-energy source, we analyzed XMM-Newton archival data and found that IRAS 16547-4247 is a hard X-ray source. We discuss these results in the context of a refined one-zone model and previous radio observations. From our study we find that it may be difficult to explain the X-ray emission as non-thermal radiation coming from the interaction region; it might instead be produced by thermal bremsstrahlung (plus photoelectric absorption) from a fast shock at the jet end. In the high-energy range, the source might be detectable by the present generation of Cherenkov telescopes, and may eventually be detected by Fermi in the GeV range.