Abstract:
Wide-range spectral coverage of blazar-type active galactic nuclei is of paramount importance for understanding the particle acceleration mechanisms assumed to take place in their jets. The Major Atmospheric Gamma Imaging Cherenkov (MAGIC) telescope participated in three multiwavelength (MWL) campaigns, observing the blazar Markarian (Mkn) 421 during the nights of April 28 and 29, 2006, and June 14, 2006. Aims. We analyzed the corresponding MAGIC very-high-energy observations during 9 nights from April 22 to 30, 2006 and on June 14, 2006. We inferred light curves with sub-day resolution and night-by-night energy spectra. Methods. MAGIC detects γ-rays by observing extended air showers in the atmosphere. The obtained air-shower images were analyzed using the standard MAGIC analysis chain. Results. A strong γ-ray signal was detected from Mkn 421 on all observation nights. The flux (E > 250 GeV) varied on a night-by-night basis between (0.92 ± 0.11) × 10⁻¹⁰ cm⁻² s⁻¹ (0.57 Crab units) and (3.21 ± 0.15) × 10⁻¹⁰ cm⁻² s⁻¹ (2.0 Crab units) in April 2006. There is a clear indication of intra-night variability with a doubling time of 36 min on the night of April 29, 2006, establishing once more rapid flux variability for this object. γ-ray spectra could be inferred for all individual nights, with power-law indices ranging from 1.66 to 2.47. We did not find statistically significant correlations between the spectral index and the flux state for individual nights. During the June 2006 campaign, a flux substantially lower than the one measured by the Whipple 10-m telescope four days later was found. Using a log-parabolic power-law fit, we deduced for some data sets the location of the spectral peak in the very-high-energy regime. Our results confirm the indications of rising peak energy with increasing flux, as expected in leptonic acceleration models.
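To make the log-parabolic fit mentioned above concrete, here is a minimal Python sketch (not the MAGIC analysis chain) that fits dN/dE = f0 (E/E0)^-(a + b·log10(E/E0)) to made-up spectral points and derives the peak energy of the spectral energy distribution E² dN/dE; the pivot energy E0, the data points and their errors are all assumed for illustration.

```python
# Illustrative sketch (not the MAGIC analysis chain): fit a log-parabolic
# power law to hypothetical spectral points and locate the SED peak.
import numpy as np
from scipy.optimize import curve_fit

E0 = 300.0  # pivot energy in GeV (assumed)

def log_parabola(E, f0, a, b):
    """dN/dE = f0 * (E/E0)^-(a + b*log10(E/E0))."""
    x = np.log10(E / E0)
    return f0 * (E / E0) ** (-(a + b * x))

# Hypothetical differential flux points (arbitrary but plausible units)
E = np.array([150., 250., 400., 700., 1200., 2000.])   # GeV
F = np.array([3.1e-9, 1.1e-9, 3.6e-10, 9.8e-11, 2.4e-11, 5.0e-12])
sigma = 0.1 * F  # assumed 10% flux errors

popt, pcov = curve_fit(log_parabola, E, F, p0=(1e-9, 2.2, 0.3), sigma=sigma)
f0, a, b = popt

# Peak of E^2 dN/dE: maximize (2-a)*x - b*x^2  =>  x = (2-a)/(2b)
E_peak = E0 * 10 ** ((2.0 - a) / (2.0 * b))
print(f"fitted a={a:.2f}, b={b:.2f}, SED peak ~ {E_peak:.0f} GeV")
```

For b > 0 the SED is concave in log-log space, so the maximum of (2 - a)x - b·x² at x = (2 - a)/(2b) gives the peak energy; as the flux rises and the spectrum hardens (smaller a), this peak moves to higher energies.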
Abstract:
Modern software development requires quick results and excellent quality, which leads to a high demand for reusability in the design and implementation of software components. The purpose of this thesis was to design and implement a reusable framework for portal front ends, including common portal features such as authentication and authorization. The aim was also to evaluate frameworks as components of reuse and to compare them to other reuse techniques. As a result of this thesis, a good picture of a framework's life cycle, problem domain and actual implementation process was obtained. It was also found that frameworks are well suited to solving recurrent and similar problems in a restricted problem domain. The outcome of this thesis was a prototype of a generic framework and an example application built on it. The implemented framework offered an abstract base for portal front ends, using object-oriented methods and well-known design patterns. The example application demonstrated the speed and ease of application development based on application frameworks.
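As a loose illustration of the kind of abstract base the thesis describes (the actual framework is not reproduced here, and all class and method names below are hypothetical), a Template Method-style skeleton in Python might look like this:

```python
# Hypothetical sketch of a framework-style abstract base for portal
# front ends (not the thesis implementation). The Template Method
# pattern fixes the request flow; subclasses fill in the hooks.
from abc import ABC, abstractmethod

class PortalFrontEnd(ABC):
    """Abstract base class: invariant flow with overridable hooks."""

    def handle_request(self, user: str, credentials: str, page: str) -> str:
        # Template method: the fixed portal request pipeline.
        if not self.authenticate(user, credentials):
            return "401 Unauthorized"
        if not self.authorize(user, page):
            return "403 Forbidden"
        return self.render(page)

    @abstractmethod
    def authenticate(self, user: str, credentials: str) -> bool: ...

    def authorize(self, user: str, page: str) -> bool:
        return True  # default: allow; subclasses may restrict

    @abstractmethod
    def render(self, page: str) -> str: ...

class IntranetPortal(PortalFrontEnd):
    def authenticate(self, user, credentials):
        return credentials == "secret"  # stand-in for a real check

    def render(self, page):
        return f"<html>{page}</html>"

print(IntranetPortal().handle_request("alice", "secret", "home"))
```

The point of such a base class is exactly the reuse argument made above: the recurring flow (authenticate, authorize, render) is written once, and each portal front end only supplies the parts that vary.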
Abstract:
The study of technology transfer in pottery production to the periphery of the Mycenaean world has been addressed by considering two different areas, southern Italy and central Macedonia. Technological features such as ceramic paste, decoration and firing have been determined for different ceramic groups established according to provenance criteria. The studies of technology and provenance have been performed following an archaeometric approach, using neutron activation analysis, petrographic analysis, X-ray diffraction and scanning electron microscopy. The results have revealed the existence of two different models. On the one hand, southern Italy seems to exhibit a more organized pottery production, which follows a Mycenaean-like technology, while in central Macedonia production is probably more varied, being based in part on the technology of the local tradition.
Abstract:
The study examined the tools and practices used for transferring and exploiting risk information during the sales and execution phases of a project. A second objective was to identify the factors that affect the efficient transfer and utilization of risk information in a project environment. The thesis mirrors the risk management practices presented in the literature against the company's risk management procedures and seeks suitable ways of making the current practices for exploiting risk information more effective. The study was carried out as a case study, and material was collected by means of in-depth interviews, by updating the risk checklist, and by examining documents available from the company's systems as well as the results of the GPSII development cooperation project. As a result of the study, problems related to the utilization of risk information were identified. In addition, a clear need was identified both to develop the database system related to change management and to increase the frequency and regularity of risk checklist reviews.
Abstract:
In order to spare functional areas during the removal of brain tumours, electrical stimulation mapping was used in 90 patients (77 in the left hemisphere and 13 in the right; 2754 cortical sites tested). Language functions were studied with a special focus on comprehension of auditory and visual words and the semantic system. In addition to naming, patients were asked to perform pointing tasks from auditory and visual stimuli (using sets of 4 different images controlled for familiarity), as well as auditory object (sound recognition) and Token test tasks. Ninety-two auditory comprehension interference sites were observed. We found that the process of auditory comprehension involved a few fine-grained, sub-centimetre cortical territories. Early stages of speech comprehension seem to relate to two posterior regions in the left superior temporal gyrus. Downstream lexical-semantic speech processing and sound analysis involved two pathways, along the anterior part of the left superior temporal gyrus, and posteriorly around the supramarginal and middle temporal gyri. Electrostimulation experimentally dissociated the perceptual consciousness attached to speech comprehension. The initial word discrimination process can be considered an "automatic" stage, the attention feedback not being impaired by stimulation as would be the case at the lexical-semantic stage. A multimodal organization of the superior temporal gyrus was also detected, since some neurones could be involved in comprehension of visual material and in naming. These findings demonstrate a fine-grained, sub-centimetre cortical representation of speech comprehension processing, mainly in the left superior temporal gyrus, and are in line with those described in dual-stream models of language comprehension processing.
Abstract:
To the editor: The Visa Qualifying Examination is a two-day test composed of approximately 950 multiple-choice questions concerning the basic and clinical sciences....
Abstract:
Present-day web search engines (e.g., Google) do not crawl and index a significant portion of the Web; hence, web users relying on search engines alone are unable to discover and access a large amount of information from the non-indexable part of the Web. Specifically, dynamic pages generated based on parameters provided by a user via web search forms (or search interfaces) are not indexed by search engines and cannot be found in search results. Such search interfaces provide web users with online access to myriads of databases on the Web. In order to obtain information from a web database of interest, a user issues a query by specifying query terms in a search form and receives the query results, a set of dynamic pages that embed the required information from the database. At the same time, issuing a query via an arbitrary search interface is an extremely complex task for any kind of automatic agent, including web crawlers, which, at least up to the present day, do not even attempt to pass through web forms on a large scale. In this thesis, our primary and key object of study is a huge portion of the Web (hereafter referred to as the deep Web) hidden behind web search interfaces. We concentrate on three classes of problems around the deep Web: characterization of the deep Web, finding and classifying deep web resources, and querying web databases. Characterizing the deep Web: Though the term deep Web was coined in 2000, which is sufficiently long ago for any web-related concept/technology, we still do not know many important characteristics of the deep Web. Another matter of concern is that surveys of the deep Web existing so far are predominantly based on the study of deep web sites in English. One can then expect that findings from these surveys may be biased, especially owing to a steady increase in non-English web content. In this way, surveying national segments of the deep Web is of interest not only to national communities but to the whole web community as well. In this thesis, we propose two new methods for estimating the main parameters of the deep Web. We use the suggested methods to estimate the scale of one specific national segment of the Web and report our findings. We also build and make publicly available a dataset describing more than 200 web databases from the national segment of the Web. Finding deep web resources: The deep Web has been growing at a very fast pace. It has been estimated that there are hundreds of thousands of deep web sites. Due to the huge volume of information in the deep Web, there has been significant interest in approaches that allow users and computer applications to leverage this information. Most approaches assumed that search interfaces to web databases of interest are already discovered and known to query systems. However, such assumptions do not hold true, mostly because of the large scale of the deep Web: indeed, for any given domain of interest there are too many web databases with relevant content. Thus, the ability to locate search interfaces to web databases becomes a key requirement for any application accessing the deep Web. In this thesis, we describe the architecture of the I-Crawler, a system for finding and classifying search interfaces. Specifically, the I-Crawler is intentionally designed to be used in deep Web characterization studies and for constructing directories of deep web resources. Unlike almost all other approaches to the deep Web existing so far, the I-Crawler is able to recognize and analyze JavaScript-rich and non-HTML searchable forms. Querying web databases: Retrieving information by filling out web search forms is a typical task for a web user. This is all the more so as the interfaces of conventional search engines are also web forms. At present, a user needs to manually provide input values to search interfaces and then extract the required data from the result pages. Manually filling out forms is cumbersome and often infeasible for complex queries, but such queries are essential for many web searches, especially in the area of e-commerce. In this way, the automation of querying and retrieving data behind search interfaces is desirable and essential for such tasks as building domain-independent deep web crawlers and automated web agents, searching for domain-specific information (vertical search engines), and extracting and integrating information from various deep web resources. We present a data model for representing search interfaces and discuss techniques for extracting field labels, client-side scripts and structured data from HTML pages. We also describe a representation of result pages and discuss how to extract and store the results of form queries. In addition, we present a user-friendly and expressive form query language that allows one to retrieve information behind search interfaces and extract useful data from the result pages based on specified conditions. We implement a prototype system for querying web databases and describe its architecture and component design.
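As a rough sketch of one subtask of such a system — deciding whether an HTML form looks like a search interface — the following self-contained Python example uses simple heuristics. The keyword list and decision rule are assumptions for illustration, far simpler than what a production crawler such as the I-Crawler would need (for instance, this ignores the JavaScript-rich and non-HTML forms the I-Crawler specifically targets):

```python
# Hedged sketch of a heuristic search-interface detector (not the
# I-Crawler itself): collect <form> elements with the stdlib parser
# and classify each using assumed keyword heuristics.
from html.parser import HTMLParser

SEARCH_HINTS = ("search", "query", "find", "keyword")

class FormCollector(HTMLParser):
    """Collects <form> elements and the <input> fields inside them."""
    def __init__(self):
        super().__init__()
        self.forms, self._current = [], None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "form":
            self._current = {"action": attrs.get("action", ""), "inputs": []}
        elif tag == "input" and self._current is not None:
            self._current["inputs"].append(attrs)

    def handle_endtag(self, tag):
        if tag == "form" and self._current is not None:
            self.forms.append(self._current)
            self._current = None

def looks_like_search_interface(form) -> bool:
    # Heuristic 1: the form offers at least one free-text box.
    has_text_box = any(i.get("type", "text") in ("text", "search")
                       for i in form["inputs"])
    # Heuristic 2: the action URL or field names hint at searching.
    blob = (form["action"] + " ".join(
        i.get("name", "") for i in form["inputs"])).lower()
    return has_text_box and any(h in blob for h in SEARCH_HINTS)

html = '<form action="/search"><input type="text" name="q"></form>'
collector = FormCollector()
collector.feed(html)
print([looks_like_search_interface(f) for f in collector.forms])  # [True]
```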
Abstract:
Contact stains recovered at break-in crime scenes are frequently characterized by mixtures of DNA from several persons. Broad knowledge of the relative contribution of DNA left behind by different users over time is of paramount importance. Such information might help crime investigators to robustly evaluate the possibility of detecting a specific (or known) individual's DNA profile based on the type and history of an object. To address this issue, a contact stain simulation-based protocol was designed. Fourteen volunteers acting as either the first or second user of an object were recruited. The first user was required to regularly handle/wear 9 different items during an 8-10-day period, whilst the second user handled/wore them for 5, 30 and 120 min in three independent simulation sessions, producing a total of 231 stains. Subsequently, the relative DNA profile contribution of each individual pair was investigated. Preliminary results showed a progressive increase in the percentage contribution of the second user compared to the first. Interestingly, the second user generally became the major DNA contributor when most objects were handled/worn for 120 min. Furthermore, the observation of unexpected additional alleles should prompt the investigation of indirect DNA transfer events.
Abstract:
The genetic aetiology of congenital hypopituitarism (CH) is not entirely elucidated. FGFR1 and PROKR2 loss-of-function mutations are classically involved in hypogonadotrophic hypogonadism (HH); however, due to the clinical and genetic overlap of HH and CH, these genes may also be involved in the pathogenesis of CH. Using a candidate gene approach, we screened 156 Brazilian patients with combined pituitary hormone deficiencies (CPHD) for loss-of-function mutations in FGFR1 and PROKR2. We identified three FGFR1 variants (p.Arg448Trp, p.Ser107Leu and p.Pro772Ser) in four unrelated patients (two males) and two PROKR2 variants (p.Arg85Cys and p.Arg248Glu) in two unrelated female patients. Five of the six patients harbouring the variants had a first-degree relative who was an unaffected carrier. Results of functional studies indicated that the new FGFR1 variant p.Arg448Trp is a loss-of-function variant, while p.Ser107Leu and p.Pro772Ser present signalling activity similar to the wild-type form. Regarding the PROKR2 variants, results from previous functional studies indicated that p.Arg85Cys moderately compromises receptor signalling through both the MAPK and Ca²⁺ pathways, while p.Arg248Glu decreases calcium mobilization but has normal MAPK activity. The presence of loss-of-function variants of FGFR1 and PROKR2 in our patients with CPHD is indicative of an adjuvant and/or modifier effect of these rare variants on the phenotype. The presence of the same variants in unaffected relatives implies that they cannot solely cause the phenotype. Other associated genetic and/or environmental modifiers may play a role in the aetiology of this condition.
Abstract:
Previous functional MRI (fMRI) studies have associated the anterior hippocampus with imagining and recalling scenes, imagining the future, recalling autobiographical memories and visual scene perception. We have observed that this typically involves the medial rather than the lateral portion of the anterior hippocampus. Here, we investigated which specific structures of the hippocampus underpin this observation. We had participants imagine novel scenes during fMRI scanning, as well as recall previously learned scenes from two different time periods (one week and 30 min prior to scanning), with analogous single-object conditions as baselines. Using an extended segmentation protocol focussing on the anterior hippocampus, we first investigated which substructures of the hippocampus respond to scenes, and found both imagination and recall of scenes to be associated with activity in the presubiculum/parasubiculum, a region associated with spatial representation in rodents. Next, we compared imagining novel scenes to recall from one week or 30 min before scanning. We expected a strong response to imagining novel scenes and 1-week recall, as both involve constructing scene representations from elements stored across the cortex. By contrast, we expected a weaker response to 30-min recall, as representations of these scenes had already been constructed but not yet consolidated. Both imagination and 1-week recall of scenes engaged anterior hippocampal structures (the anterior subiculum and uncus, respectively), indicating possible roles in scene construction. By contrast, 30-min recall of scenes elicited significantly less activation of the anterior hippocampus but did engage posterior CA3. Together, these results elucidate the functions of different parts of the anterior hippocampus, a key brain area about which little is definitively known.
Abstract:
Humans like some colours and dislike others, but which particular colours and why remains to be understood. Empirical studies on colour preferences have generally targeted most preferred colours, but rarely least preferred (disliked) colours. In addition, findings are often based on general colour preferences, leaving open the question of whether results generalise to specific objects. Here, 88 participants selected the colours they preferred most and least for three context conditions (general, interior walls, t-shirt) using a high-precision colour picker. Participants also indicated whether they associated their colour choice with a valenced object or concept. The chosen colours varied widely between individuals and contexts, and so did the reasons for the choices. Consistent patterns also emerged: compared to least preferred colours, most preferred colours in general were more chromatic, while for walls they were lighter and for t-shirts they were darker and less chromatic. This means that general colour preferences cannot explain object-specific colour preferences. Measures of the selection process further revealed that, compared to most preferred colours, least preferred colours were chosen more quickly and were less often linked to valenced objects or concepts. The high intra- and inter-individual variability in this and previous reports furthers our understanding that colour preferences are determined by subjective experiences and that most and least preferred colours are not processed equally.
Abstract:
The objective of this Master's thesis was to improve the throughput time of the flat-block line at the Aker Yards Turku shipyard. The current lead time of the block factory was determined, and improvements were sought by applying Lean management principles. A thorough work study was carried out together with the supervisors, establishing the lead times and working hours of the different work phases. The results revealed three clear areas of improvement that can reduce the current lead time and subcontracting costs. These were improving the installation and welding of stiffeners, minimizing after-sales work, and making the installation of lifting supports more efficient. In addition, a fourth area of improvement, the correct timing of part fabrication, was discovered alongside the research work. This did not emerge from the research results but from interviewing people. Based on the identified areas of improvement, an investment proposal was drawn up for making stiffener operations more efficient, together with new procedures for minimizing after-sales work and for streamlining the installation of lifting supports. Timing the part fabrication correctly aims at making operations at the block assembly site more efficient.
Abstract:
We propose a probabilistic object classifier for outdoor scene analysis as a first step in solving the problem of scene context generation. The method begins with top-down control, which uses the previously learned models (appearance and absolute location) to obtain an initial pixel-level classification. This information provides us with the cores of objects, which are used to acquire a more accurate object model. Growing these cores by specific active regions then allows us to obtain an accurate recognition of known regions. Next, a general segmentation stage provides the segmentation of unknown regions using a bottom-up strategy. Finally, the last stage performs a region fusion of known and unknown segmented objects. The result is both a segmentation of the image and a recognition of each segment as a given object class or as an unknown segmented object. Furthermore, experimental results are shown and evaluated to prove the validity of our proposal.
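As a hedged illustration of the top-down step (not the authors' implementation), the following Python sketch combines a per-class appearance likelihood with an absolute-location prior to obtain an initial pixel-level classification; the toy Gaussian appearance models and the location priors are assumed rather than learned:

```python
# Hedged sketch: pixel-level classification from a learned appearance
# model and an absolute-location prior. Models here are toy Gaussians
# and hand-made priors; real ones would be learned from training data.
import numpy as np

H, W, CLASSES = 4, 4, ["sky", "grass"]

# Appearance model: per-class Gaussian over grey level (assumed params)
means, stds = {"sky": 200.0, "grass": 80.0}, {"sky": 20.0, "grass": 25.0}

# Absolute-location prior: sky more likely in upper rows (assumed)
location_prior = {
    "sky":   np.linspace(0.9, 0.1, H)[:, None] * np.ones((H, W)),
    "grass": np.linspace(0.1, 0.9, H)[:, None] * np.ones((H, W)),
}

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Toy image: bright upper half, dark lower half
image = np.vstack([np.full((2, W), 195.0), np.full((2, W), 85.0)])

# Per-pixel score proportional to appearance likelihood x location prior
scores = np.stack([gaussian(image, means[c], stds[c]) * location_prior[c]
                   for c in CLASSES])
labels = np.take(CLASSES, scores.argmax(axis=0))
print(labels)  # upper rows -> "sky", lower rows -> "grass"
```

The pixels where this combined score is most confident would then serve as the object cores that the subsequent region-growing stage expands.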
Abstract:
This dissertation is based on four articles dealing with the modeling of ozonation. The literature part considers models for hydrodynamics in bubble column simulation and presents a review of methods for obtaining mass transfer coefficients. The mass transfer methods presented are general models and can be applied to any gas-liquid system. Ozonation reaction models and methods for obtaining stoichiometric coefficients and reaction rate coefficients for ozonation reactions are discussed in the final section of the literature part. In the first article, ozone gas-liquid mass transfer into water in a bubble column was investigated for different pH values. A more general method for the estimation of the mass transfer and Henry's coefficients was developed from the Beltrán method. The ozone volumetric mass transfer coefficient and the Henry's coefficient were determined simultaneously by parameter estimation using a nonlinear optimization method. A minor dependence of the Henry's law constant on pH was detected in the pH range 4-9. In the second article, a new method using the axial dispersion model for the estimation of ozone self-decomposition kinetics in a semi-batch bubble column reactor was developed. The reaction rate coefficients for literature equations of ozone decomposition and the gas-phase dispersion coefficient were estimated and compared with literature data. In the pH range 7-10, reaction orders of 1.12 with respect to ozone and 0.51 with respect to the hydroxyl ion were obtained, which is in good agreement with the literature. The model parameters were determined by parameter estimation using a nonlinear optimization method. A sensitivity analysis was conducted using the objective function method to obtain information about the reliability and identifiability of the estimated parameters. In the third article, the reaction rate coefficients and the stoichiometric coefficients in the reaction of ozone with the model component p-nitrophenol were estimated at low pH in water using nonlinear optimization. A novel method for the estimation of multireaction model parameters in ozonation was developed. In this method, the concentration of unknown intermediate compounds is represented as a residual COD (chemical oxygen demand), calculated from the measured COD and the theoretical COD of the known species. The decomposition rate of p-nitrophenol on the pathway producing hydroquinone was found to be about two times faster than that on the pathway producing 4-nitrocatechol. In the fourth article, the reaction kinetics of p-nitrophenol ozonation was studied in a bubble column at pH 2. Using the new reaction kinetic model presented in the previous article, the reaction kinetic parameters (rate coefficients and stoichiometric coefficients) as well as the mass transfer coefficient were estimated by nonlinear estimation. The decomposition rate of p-nitrophenol was found to be equal on the pathway producing hydroquinone and on the pathway producing 4-nitrocatechol. Comparison of the rate coefficients with the case at initial pH 5 indicates that the p-nitrophenol degradation producing 4-nitrocatechol is more selective towards molecular ozone than the reaction producing hydroquinone. The identifiability and reliability of the estimated parameters were analyzed with the Markov chain Monte Carlo (MCMC) method.
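As an illustration of the simultaneous estimation described for the first article (not the dissertation's code), the sketch below fits the volumetric mass transfer coefficient kLa and Henry's coefficient H to synthetic dissolved-ozone data, assuming the simplified semi-batch absorption model dC/dt = kLa(p/H − C), whose solution from C(0) = 0 is C(t) = (p/H)(1 − exp(−kLa·t)); the partial pressure, data, noise and bounds are all assumptions:

```python
# Hedged sketch: simultaneous nonlinear estimation of kLa and Henry's
# coefficient from dissolved-ozone concentration data in a semi-batch
# bubble column, under a simplified absorption model. Numbers are
# made up for illustration.
import numpy as np
from scipy.optimize import least_squares

p_o3 = 2.0e3  # ozone partial pressure in the gas feed [Pa] (assumed)

def model(theta, t):
    kla, henry = theta  # [1/s], [Pa·m3/mol]
    return (p_o3 / henry) * (1.0 - np.exp(-kla * t))  # C(t) [mol/m3]

def residuals(theta, t, c_meas):
    return model(theta, t) - c_meas

t = np.linspace(0, 600, 13)                    # sampling times [s]
c_true = model((0.01, 1.0e5), t)               # synthetic "truth"
rng = np.random.default_rng(0)
c_meas = c_true + rng.normal(0, 5e-4, t.size)  # add measurement noise

fit = least_squares(residuals, x0=(0.005, 5e4), args=(t, c_meas),
                    bounds=([1e-5, 1e3], [1.0, 1e7]))
kla_hat, henry_hat = fit.x
print(f"kLa ~ {kla_hat:.4f} 1/s, H ~ {henry_hat:.3g} Pa·m3/mol")
```

Because the saturation level fixes p/H while the approach to saturation fixes kLa, both parameters are identifiable from a single well-sampled concentration curve, which is why the dissertation can report them from one joint fit.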
Abstract:
There is an increasing reliance on computers to solve complex engineering problems, because computers, in addition to supporting the development and implementation of adequate and clear models, can greatly reduce the financial support required. The ability of computers to perform complex calculations at high speed has enabled the creation of highly complex systems to model real-world phenomena. The complexity of fluid dynamics problems makes it difficult or impossible to solve the flow equations for an object exactly. Approximate solutions can be obtained by the construction and measurement of prototypes placed in a flow, or by numerical simulation. Since the use of prototypes can be prohibitively time-consuming and expensive, many have turned to simulations to provide insight during the engineering process; the simulation setup and parameters can be altered much more easily than in a real-world experiment. The objective of this research work is to develop numerical models for different suspensions (fiber suspensions, blood flow through microvessels and branching geometries, and magnetic fluids), as well as for fluid flow through porous media. The models have merit as scientific tools and also have practical applications in industry. Most of the numerical simulations were performed with the commercial software Fluent, with user-defined functions added to apply a multiscale method and a magnetic field. The results from the simulation of fiber suspensions can elucidate the physics behind the break-up of a fiber floc, opening the possibility of developing a meaningful numerical model of fiber flow. The simulation of blood movement from an arteriole through a capillary to a venule showed that the model based on the volume-of-fluid (VOF) method can successfully predict the deformation and flow of red blood cells (RBCs) in an arteriole. Furthermore, the result corresponds to the experimental observation that the RBC is deformed during the movement. The concluding remarks provide a correct methodology and a mathematical and numerical framework for the simulation of blood flows in branching geometries. Analysis of the ferrofluid simulations indicates that the magnetic Soret effect can be even larger than the conventional one, and that its strength depends on the strength of the magnetic field, as confirmed experimentally by Völker and Odenbach. It was also shown that when a magnetic field is perpendicular to the temperature gradient, there is an additional increase in heat transfer compared to the cases where the magnetic field is parallel to the temperature gradient. In addition, a statistical evaluation (the Taguchi technique) of the magnetic fluids showed that the temperature and the initial concentration of the magnetic phase make the largest and smallest contributions to thermodiffusion, respectively. In the simulation of flow through porous media, the dimensionless pressure drop was studied at different Reynolds numbers, based on pore permeability and interstitial fluid velocity. The results agreed well with the correlation of Macdonald et al. (1979) for the range of flow Reynolds numbers studied. Furthermore, the calculated dispersion coefficients in the cylinder geometry were found to be in agreement with those of Seymour and Callaghan.
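For reference, the correlation of Macdonald et al. (1979) against which the pressure-drop results were compared is an Ergun-type equation; the sketch below evaluates it with the constants commonly attributed to that work (A ≈ 180, B ≈ 1.8 for smooth particles), and all fluid and bed parameters are assumed values for illustration, not the dissertation's data:

```python
# Hedged sketch: Ergun-type pressure drop through a packed bed, using
# the constants commonly attributed to Macdonald et al. (1979).
# Not the dissertation's code; parameters below are assumed.
A, B = 180.0, 1.8  # viscous and inertial constants (smooth particles)

def pressure_drop_per_length(v, d_p, eps, mu, rho):
    """dP/L [Pa/m] for superficial velocity v through spheres of size d_p."""
    viscous = A * mu * (1 - eps) ** 2 * v / (eps ** 3 * d_p ** 2)
    inertial = B * rho * (1 - eps) * v ** 2 / (eps ** 3 * d_p)
    return viscous + inertial

# Assumed example: water through a bed of 1 mm spheres, porosity 0.4
mu, rho, d_p, eps = 1.0e-3, 1000.0, 1.0e-3, 0.4
for v in (0.001, 0.01, 0.1):  # superficial velocities [m/s]
    re_p = rho * v * d_p / (mu * (1 - eps))  # particle Reynolds number
    dpl = pressure_drop_per_length(v, d_p, eps, mu, rho)
    print(f"v={v:5.3f} m/s  Re={re_p:7.2f}  dP/L={dpl:10.1f} Pa/m")
```

The two terms make the Reynolds-number dependence explicit: at low Re the viscous term dominates and dP/L grows linearly with velocity, while at higher Re the inertial term takes over, which is the regime-dependent behavior a dimensionless pressure-drop study of this kind examines.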