914 results for Multiple-scale processing


Relevance:

30.00%

Publisher:

Abstract:

The seminal multiple view stereo benchmark evaluations from Middlebury and by Strecha et al. have played a major role in propelling the development of multi-view stereopsis methodology. Although seminal, these benchmark datasets are limited in scope, with few reference scenes. Here, we try to take these works a step further by proposing a new multi-view stereo dataset, which is an order of magnitude larger in the number of scenes and offers a significant increase in diversity. Specifically, we propose a dataset containing 80 scenes of large variability. Each scene consists of 49 or 64 accurate camera positions and reference structured-light scans, all acquired by a 6-axis industrial robot. To apply this dataset, we propose an extension of the evaluation protocol from the Middlebury evaluation, reflecting the more complex geometry of some of our scenes. The proposed dataset is used to evaluate the state-of-the-art multi-view stereo algorithms of Tola et al., Campbell et al., and Furukawa et al. We hereby demonstrate the usability of the dataset and gain insight into the workings and challenges of multi-view stereopsis. Through these experiments we empirically validate some of the central hypotheses of multi-view stereopsis, as well as determine and reaffirm some of its central challenges.
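Middlebury-style protocols score a reconstruction on two complementary measures: accuracy (how close reconstructed points lie to the reference) and completeness (how much of the reference is covered). A minimal sketch follows; the percentile, threshold, and toy point clouds are illustrative assumptions, not the paper's settings.

```python
import math

def _nn(p, cloud):
    # distance from point p to its nearest neighbour in cloud
    return min(math.dist(p, q) for q in cloud)

def accuracy(recon, gt, pct=90):
    """Accuracy: pct-th percentile of distances from reconstructed
    points to the ground truth (lower is better)."""
    d = sorted(_nn(p, gt) for p in recon)
    return d[min(len(d) - 1, len(d) * pct // 100)]

def completeness(gt, recon, thresh=0.5):
    """Completeness: fraction of ground-truth points lying within
    thresh of the reconstruction (higher is better)."""
    return sum(_nn(p, recon) <= thresh for p in gt) / len(gt)

# Toy clouds: a reconstruction slightly offset from, and missing part of,
# the ground truth.
gt = [(float(x), 0.0, 0.0) for x in range(10)]
recon = [(x + 0.1, 0.0, 0.0) for x in range(8)]
print(accuracy(recon, gt), completeness(gt, recon))
```

The two measures trade off against each other, which is why benchmark protocols report both rather than a single score.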

Relevance:

30.00%

Publisher:

Abstract:

In our recent work in different bioreactors up to 2.5 L in scale, we have successfully cultured hMSCs using the minimum agitator speed required for complete microcarrier suspension, N_JS. In addition, we also reported a scalable protocol for the detachment of hMSCs from microcarriers in spinner flasks, using cells from two donors. The essence of the protocol is the use of a short period of intense agitation in the presence of enzymes such that the cells are detached; once detachment is achieved, the cells are smaller than the Kolmogorov scale of turbulence and hence not damaged. Here, the same approach has been effective for culture at N_JS and detachment in situ in 15 mL ambr™ bioreactors, 100 mL spinner flasks, and 250 mL Dasgip bioreactors. In these experiments, cells from four different donors were used along with two types of microcarrier (with and without surface coatings), four different enzymes, and three different growth media (with and without serum), a total of 22 different combinations. In all cases, after detachment the cells were shown to retain their desired quality attributes and were able to proliferate. This agitation strategy for culture and harvest therefore offers a sound basis for a wide range of scales of operation.
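The detachment argument rests on the Kolmogorov length scale, η = (ν³/ε)^(1/4): once detached, a cell smaller than η sits below the smallest turbulent eddies and is not damaged. A quick sketch with assumed illustrative values (the viscosity and dissipation rate below are not taken from the study):

```python
def kolmogorov_scale(nu, epsilon):
    """Kolmogorov length scale (m) from kinematic viscosity nu (m^2/s)
    and specific turbulent energy dissipation rate epsilon (W/kg)."""
    return (nu ** 3 / epsilon) ** 0.25

nu = 1e-6        # kinematic viscosity of water at ~20 C, m^2/s
epsilon = 0.1    # assumed dissipation rate during intense agitation, W/kg
eta = kolmogorov_scale(nu, epsilon)
print(f"eta = {eta * 1e6:.0f} um")  # ~56 um: larger than a ~15-20 um detached cell
```

Cells attached to a ~200 µm microcarrier, by contrast, form a body larger than η, which is why detachment must precede the period of intense agitation being survivable.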

Relevance:

30.00%

Publisher:

Abstract:

The production of recombinant therapeutic proteins is an active area of research in drug development. These bio-therapeutic drugs target nearly 150 disease states and promise to bring better treatments to patients. However, if new bio-therapeutics are to be made more accessible and affordable, improvements in production performance and optimization of processes are necessary. A major challenge lies in controlling the effect of process conditions on the production of intact functional proteins. To achieve this, improved tools are needed for bioprocessing. For example, implementation of process modeling and high-throughput technologies can be used to achieve quality by design, leading to improvements in productivity. Commercially, the most sought-after targets are secreted proteins due to the ease of handling in downstream procedures. This chapter outlines different approaches for production and optimization of secreted proteins in the host Pichia pastoris. © 2012 Springer Science+Business Media, LLC.

Relevance:

30.00%

Publisher:

Abstract:

Background: In 2008, the Anticholinergic Cognitive Burden (ACB) scale was generated through a combination of laboratory data, literature review, and expert opinion. This scale identified an increased risk of mortality and worsening cognitive function in multiple populations, including 13,000 older adults in the United Kingdom. We present an updated scale based on new information and new medications available to the market. Methods: We conducted a systematic review for publications recognizing medications with adverse cognitive effects due to anticholinergic properties and found no new medications since 2008. Therefore, we identified medications from a review of newly approved medications since 2008 and medications identified through the clinical experience of the authors. To be included in the updated ACB scale, medications must have met the following criteria: ACB score of 1: evidence from in vitro data that the medication has antagonist activity at muscarinic receptors; ACB score of 2: evidence from literature, prescriber's information, or expert opinion of clinical anticholinergic effect; ACB score of 3: evidence from literature, prescriber's information, or expert opinion of the medication causing delirium. Results: The reviewer panel included two geriatric pharmacists, one geriatric psychiatrist, one geriatrician, and one hospitalist. Twenty-three medications were eligible for review and possible inclusion in the updated ACB scale. Of these, seven medications were excluded due to a lack of evidence for anticholinergic activity. Of the remaining 16 medications, ten had laboratory evidence of anticholinergic activity and were added to the ACB list with a score of one. One medication was added with a score of two. Five medications were included in the ACB scale with a score of three. Conclusions: The revised ACB scale provides an update of medications with anticholinergic effects that may increase the risk of cognitive impairment. Future updates will be routinely conducted to maintain an applicable library of medications for use in clinical and research environments.
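In practice, ACB scores are commonly summed over a patient's whole medication list to estimate total anticholinergic burden. A minimal sketch of that use; the drug names and scores below are hypothetical placeholders, not entries from the published scale:

```python
# Hypothetical entries; the published ACB list assigns each medication a
# score of 1, 2, or 3 using the criteria described in the abstract.
ACB_SCORES = {"drug_a": 1, "drug_b": 2, "drug_c": 3}

def total_burden(medications, scores=ACB_SCORES):
    """Sum ACB scores over a patient's regimen; unlisted drugs score 0."""
    return sum(scores.get(m, 0) for m in medications)

print(total_burden(["drug_a", "drug_c", "unlisted"]))  # -> 4
```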

Relevance:

30.00%

Publisher:

Abstract:

Removal of dissolved salts and toxic chemicals from water, especially at levels of a few parts per million (ppm), is one of the most difficult problems. There are several methods used for water purification; the choice of method depends mainly on the level of feed-water salinity, the source of energy, and the type of contaminants present. Distillation is an age-old method that can remove all types of dissolved impurities from contaminated water. In multiple-effect distillation (MED), the latent heat of steam is recycled several times to produce many units of distilled water from one unit of primary steam input. This is already used in large-capacity plants for treating seawater. The challenge lies in designing a system for small-scale operations that can treat a few cubic meters of water per day, especially suited to rural communities where the available water is brackish. A small-scale MED unit with an extendable number of effects has been designed and analyzed for optimum yield in terms of total distillate produced. © 2010 Elsevier B.V.
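The one-unit-of-steam, many-units-of-distillate economy of MED can be illustrated with a toy gained-output-ratio calculation: each effect reuses the latent heat condensed in the previous effect, minus losses. The per-effect loss factor below is an assumed illustrative value, not a design figure from the paper.

```python
def gained_output_ratio(n_effects, loss_per_effect=0.05):
    """Units of distillate per unit of primary steam, assuming each
    effect passes on its latent heat with a fixed fractional loss."""
    ratio = 0.0
    heat = 1.0  # one unit of primary steam's latent heat
    for _ in range(n_effects):
        ratio += heat                     # heat condensed here yields distillate
        heat *= (1.0 - loss_per_effect)   # remainder drives the next effect
    return ratio

for n in (1, 4, 8):
    print(n, round(gained_output_ratio(n), 2))
```

This is why an "extendable number of effects" matters for yield: the gained output ratio grows with each added effect, though with diminishing returns as losses accumulate.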

Relevance:

30.00%

Publisher:

Abstract:

For the treatment and monitoring of Parkinson's disease (PD) to be scientific, a key requirement is that measurement of disease stages and severity is quantitative, reliable, and repeatable. The last 50 years in PD research have been dominated by qualitative, subjective ratings obtained by human interpretation of the presentation of disease signs and symptoms at clinical visits. More recently, "wearable," sensor-based, quantitative, objective, and easy-to-use systems for quantifying PD signs for large numbers of participants over extended durations have been developed. This technology has the potential to significantly improve both clinical diagnosis and management in PD and the conduct of clinical studies. However, the large-scale, high-dimensional character of the data captured by these wearable sensors requires sophisticated signal processing and machine-learning algorithms to transform it into scientifically and clinically meaningful information. Such algorithms that "learn" from data have shown remarkable success in making accurate predictions for complex problems in which human skill has been required to date, but they are challenging to evaluate and apply without a basic understanding of the underlying logic on which they are based. This article contains a nontechnical tutorial review of relevant machine-learning algorithms, also describing their limitations and how these can be overcome. It discusses the implications of this technology and offers a practical road map for realizing its full potential in PD research and practice. © 2016 International Parkinson and Movement Disorder Society.
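The pipeline the review describes (raw sensor trace, then summary features, then a learned prediction rule) can be sketched in miniature. Everything below is synthetic and hypothetical: the signals, the two-feature summary, and the nearest-class-mean rule stand in for the far richer features and models used in real PD work.

```python
from statistics import mean, stdev
import math, random
random.seed(1)

def make_trace(amplitude, n=200):
    """Synthetic 'tremor' signal: a sinusoid plus sensor noise."""
    return [amplitude * math.sin(0.5 * i) + random.gauss(0, 0.1) for i in range(n)]

def features(trace):
    """Two simple summary features: signal spread and mean jerkiness."""
    return (stdev(trace), mean(abs(a - b) for a, b in zip(trace, trace[1:])))

train = [(features(make_trace(0.2)), "low-tremor") for _ in range(10)] + \
        [(features(make_trace(1.0)), "high-tremor") for _ in range(10)]

def nearest_class_mean(x, labelled):
    """The simplest 'learned' rule: predict the class whose mean feature
    vector is closest to x."""
    groups = {}
    for f, y in labelled:
        groups.setdefault(y, []).append(f)
    means = {y: tuple(mean(c) for c in zip(*fs)) for y, fs in groups.items()}
    return min(means, key=lambda y: math.dist(x, means[y]))

pred = nearest_class_mean(features(make_trace(1.0)), train)
print(pred)
```

Even this toy makes the review's point about evaluation: the rule is only as good as the labelled examples and features it was trained on, which is why held-out validation matters.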

Relevance:

30.00%

Publisher:

Abstract:

Consumer satisfaction is an essential condition of competitiveness and profitable operation, and one of its defining elements is the relationship between perceived and expected quality. Quality expectations have also been formulated for the internet, one of the defining channels of our day, which is why defining online service quality, and with it measuring online customer satisfaction, has taken on a significant role. The aim of this study is to provide a review of the literature on the topic, to examine the E-S-QUAL and E-RecS-QUAL scales for measuring online customer satisfaction known from the literature, to test their validity under Hungarian conditions, and, by making the modifications that appear necessary, to create a scale usable in Hungary. As the foundation for measuring online customer satisfaction, the study surveys theories of how consumers perceive and evaluate online service quality, and then presents the various measurement methods, giving a prominent role to the E-S-QUAL and E-RecS-QUAL scales, which are among the most widely applied methods. The review centers on websites where purchases can be made, and the research was carried out among the customers of a major Hungarian online bookstore. ______ Over the last decade the business-to-consumer online market has been growing very fast. In the marketing literature, many studies have focused on understanding and measuring e-service quality (e-sq) and online customer satisfaction. The aim of the study is to summarize these concepts, analyse the relationship between e-sq and customer loyalty, which increases the competitiveness of companies, and to create a valid and reliable scale for the Hungarian market for measuring online customer satisfaction. 
The basis of the empirical study is the E-S-QUAL and its second scale, the E-RecS-QUAL, which are widely used multiple-item scales measuring e-sq along seven dimensions: efficiency, system availability, fulfilment, privacy, responsiveness, compensation, and contact. The study focuses on the websites customers use to shop online.
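A minimal sketch of how such a multi-dimension instrument is typically scored: item responses are averaged within each dimension, then across dimensions. The dimension names follow the abstract; the item counts, 1-5 Likert responses, and equal weighting are illustrative assumptions.

```python
from statistics import mean

# Hypothetical responses for four of the seven E-S-QUAL dimensions.
responses = {
    "efficiency":          [4, 5, 4],
    "system availability": [3, 4],
    "fulfilment":          [5, 5, 4],
    "privacy":             [4, 4],
}

dimension_scores = {d: mean(v) for d, v in responses.items()}
overall = mean(dimension_scores.values())
print({d: round(s, 2) for d, s in dimension_scores.items()}, round(overall, 3))
```

Validity testing of the kind the study performs then asks whether these dimensions are statistically distinct and reliable in the target population, rather than taking the averaging structure for granted.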

Relevance:

30.00%

Publisher:

Abstract:

The purpose of this study was to investigate the effects of direct instruction in story grammar on the reading and writing achievement of second graders. Three aspects of story grammar (character, setting, and plot) were taught with direct instruction using the concept development technique of deep processing. Deep processing, which included (a) visualization (the drawing of pictures), (b) verbalization (the writing of sentences), (c) the attachment of physical sensations, and (d) the attachment of emotions to concepts, was used to help students make the mental connections necessary for recall and application of character, setting, and plot when constructing meaning in reading and writing. Four existing classrooms consisting of seventy-seven second-grade students were randomly assigned to two treatments, experimental and comparison. Both groups were pretested and posttested for reading achievement using the Gates-MacGinitie Reading Tests. Pretest and posttest writing samples were collected and evaluated. Writing achievement was measured using (a) a primary trait scoring scale (an adapted version of the Glazer Narrative Composition Scale) and (b) a holistic scoring scale by R. J. Pritchard. ANCOVAs were performed on the posttests, adjusted for the pretests, to determine whether or not the methods differed. There was no significant improvement in reading after the eleven-day experimental period for either group, nor did the two groups differ. There was significant improvement in writing for the experimental group over the comparison group. Pretreatment and posttreatment interviews were selectively collected to evaluate qualitatively whether the students were able to identify and manipulate elements of story grammar and to determine patterns in metacognitive processing. Interviews provided evidence that most students in the experimental group gained, while most students in the comparison group did not gain, in their ability to manipulate, with understanding, the concepts of character, setting, and plot.

Relevance:

30.00%

Publisher:

Abstract:

The primary aim of this dissertation is to develop data mining tools for knowledge discovery in biomedical data when multiple (homogeneous or heterogeneous) sources of data are available. The central hypothesis is that, when information from multiple sources of data is used appropriately and effectively, knowledge discovery can be better achieved than is possible from a single source alone. Recent advances in high-throughput technology have enabled biomedical researchers to generate large volumes of diverse types of data on a genome-wide scale. These data include DNA sequences, gene expression measurements, and much more; they provide the motivation for building analysis tools to elucidate the modular organization of the cell. The challenges include efficiently and accurately extracting information from the multiple data sources, representing the information effectively, developing analytical tools, and interpreting the results in the context of the domain. The first part considers the application of feature-level integration to design classifiers that discriminate between soil types. The machine learning tools SVM and KNN were used to successfully distinguish between several soil samples. The second part considers clustering using multiple heterogeneous data sources. The resulting Multi-Source Clustering (MSC) algorithm was shown to perform better than clustering methods that use only a single data source or a simple feature-level integration of heterogeneous data sources. The third part proposes a new approach to effectively incorporate incomplete data into clustering analysis. Adapted from the K-means algorithm, the Generalized Constrained Clustering (GCC) algorithm makes use of incomplete data in the form of constraints to perform exploratory analysis. Novel approaches for extracting constraints were proposed. For sufficiently large constraint sets, the GCC algorithm outperformed the MSC algorithm. 
The last part considers the problem of providing a theme-specific environment for mining multi-source biomedical data. The database, called PlasmoTFBM, focusing on gene regulation of Plasmodium falciparum, contains diverse information and has a simple interface that allows biologists to explore the data. It provided a framework for comparing different analytical tools for predicting regulatory elements and for designing useful data mining tools. The conclusion is that the experiments reported in this dissertation strongly support the central hypothesis.
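The GCC algorithm itself is not specified in the abstract, but the general idea it builds on (K-means that respects pairwise constraints, in the style of COP-KMEANS) can be sketched as follows. The point coordinates, the cannot-link constraint, and the assignment rule are illustrative assumptions.

```python
import random
random.seed(0)

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def constrained_kmeans(points, k, cannot_link, iters=20):
    """Toy constraint-respecting K-means: each point takes the nearest
    centroid that does not place it with a cannot-link partner already
    assigned in this pass."""
    centroids = random.sample(points, k)
    labels = [None] * len(points)
    for _ in range(iters):
        labels = [None] * len(points)
        for i, p in enumerate(points):
            for c in sorted(range(k), key=lambda c: dist2(p, centroids[c])):
                if all(labels[j] != c for j in cannot_link.get(i, ())):
                    labels[i] = c
                    break
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centroids[c] = tuple(sum(xs) / len(members) for xs in zip(*members))
    return labels

# Two spatial clusters, but points 0 and 1 carry a cannot-link constraint.
points = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (5.0, 5.0), (5.1, 5.0), (5.0, 5.1)]
labels = constrained_kmeans(points, 2, {0: [1], 1: [0]})
print(labels)  # points 0 and 1 end up in different clusters
```

The constraint mechanism is what lets incomplete side information steer the clustering even when the feature data alone would group the points differently.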

Relevance:

30.00%

Publisher:

Abstract:

The Deccan Trap basalts are the remnants of a massive series of lava flows that erupted at the K/T boundary and covered 1-2 million km² of west-central India. This eruptive event is of global interest because of its possible link to the major mass extinction event, and there is much debate about the duration of this massive volcanic event. In contrast to isotopic or paleomagnetic dating methods, I explore an alternative approach to determine the lifecycle of the magma chambers that supplied the lavas, and extend the concept to obtain a tighter constraint on Deccan's duration. My method relies on extracting time information from elemental and isotopic diffusion across zone boundaries in individual crystals. I determined elemental and Sr-isotopic variations across abnormally large (2-5 cm) plagioclase crystals from the Thalghat and Kashele "Giant Plagioclase Basalts" from the lowermost Jawhar and Igatpuri Formations, respectively, in the thickest Western Ghats section near Mumbai. I also obtained bulk-rock major, trace, and rare earth element chemistry of each lava flow from the two formations. Thalghat flows contain only 12% zoned crystals, with 87Sr/86Sr ratios of 0.7096 in the core and 0.7106 in the rim, separated by a sharp boundary. In contrast, all Kashele crystals have a wider range of 87Sr/86Sr values, with multiple zones. Geochemical modeling of the data suggests that the two types of crystals grew in distinct magmatic environments. Modeling intracrystalline diffusive equilibration between the core and rim of Thalghat crystals yielded a crystal growth rate of 2.03 × 10⁻¹⁰ cm/s and a residence time of 780 years for the crystals in the magma chamber(s). Employing some assumptions based on field and geochronologic evidence, I extrapolated this residence time to the entire Western Ghats and obtained an estimate of 25,000–35,000 years for the duration of Western Ghats volcanism. 
This gives an eruptive rate of 30–40 km³/yr, which is much higher than that of any presently erupting volcano. This result will remain speculative until a similarly detailed analytical-modeling study is performed for the rest of the Western Ghats formations.
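The abstract's numbers can be sanity-checked with a one-line calculation: at the reported growth rate, a 780-year residence implies roughly the observed crystal sizes. The seconds-per-year constant is the usual approximation.

```python
SECONDS_PER_YEAR = 3.156e7          # ~365.25 days
growth_rate_cm_s = 2.03e-10         # crystal growth rate from the abstract
residence_s = 780 * SECONDS_PER_YEAR
growth_cm = growth_rate_cm_s * residence_s
print(f"{growth_cm:.1f} cm")        # ~5 cm, consistent with the 2-5 cm crystals
```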

Relevance:

30.00%

Publisher:

Abstract:

Carbon nanotubes (CNTs) could serve as potential reinforcement for metal matrix composites with improved mechanical properties. However, dispersion of CNTs in the matrix has been a longstanding problem, since they tend to form clusters to minimize their surface area. The aim of this study was to use plasma and cold spraying techniques to synthesize CNT-reinforced aluminum composites with improved dispersion, and to quantify the degree of CNT dispersion as it influences the mechanical properties. The novel method of spray drying was used to disperse CNTs in Al-12 wt.% Si prealloyed powder, which was used as feedstock for plasma and cold spraying. A new method for quantification of CNT distribution was developed: two parameters, the Dispersion Parameter (DP) and the Clustering Parameter (CP), are proposed based on image analysis and the distances between the centers of CNTs. Nanomechanical properties were correlated with the dispersion of CNTs in the microstructure. Coating microstructure evolution is discussed in terms of splat formation, deformation and damage of CNTs, and the CNT/matrix interface. The effect of Si and CNT content on the reaction at the CNT/matrix interface was studied thermodynamically and kinetically. A pseudo phase diagram was computed which predicts the interfacial carbide formed by reaction between CNTs and Al-Si alloy at the processing temperature. Kinetic analysis showed that Al4C3 forms with the Al-12 wt.% Si alloy, while SiC forms with the Al-23 wt.% Si alloy. Mechanical properties at the nano-, micro-, and macro-scale were evaluated using nanoindentation and nanoscratch, microindentation, and bulk tensile testing, respectively. Nano- and micro-scale mechanical properties (elastic modulus, hardness, and yield strength) showed improvement, whereas macro-scale mechanical properties were poor. 
The inversion of the mechanical properties at different length scales was attributed to porosity, CNT clustering, CNT-splat adhesion, and Al4C3 formation at the CNT/matrix interface. The Dispersion Parameter (DP) was more sensitive than the Clustering Parameter (CP) in measuring the degree of CNT distribution in the matrix.
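The paper's exact DP/CP definitions are not reproduced in the abstract; a common image-analysis stand-in for the same idea (quantifying dispersion from distances between particle centers) is the coefficient of variation of nearest-neighbour distances, sketched below with toy 2-D coordinates.

```python
import math

def nn_distance_cv(centers):
    """Coefficient of variation of nearest-neighbour distances between
    particle centres: ~0 for evenly dispersed points, large when some
    points sit in tight clusters. A stand-in metric, not the paper's DP/CP."""
    dists = []
    for i, a in enumerate(centers):
        dists.append(min(math.dist(a, b) for j, b in enumerate(centers) if j != i))
    m = sum(dists) / len(dists)
    var = sum((d - m) ** 2 for d in dists) / len(dists)
    return math.sqrt(var) / m

uniform = [(float(x), float(y)) for x in range(5) for y in range(5)]
mixed = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0)]  # a tight pair plus an isolated point
print(nn_distance_cv(uniform), nn_distance_cv(mixed))
```

Any such metric is computed on CNT center coordinates segmented from micrographs, which is why the paper couples the parameters to image analysis.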

Relevance:

30.00%

Publisher:

Abstract:

Parallel processing is prevalent in many manufacturing and service systems. Many manufactured products are built and assembled from several components fabricated in parallel lines. An example of this manufacturing system configuration is observed at a manufacturing facility equipped to assemble and test web servers. Characteristics of a typical web server assembly line are: multiple products, job circulation, and parallel processing. The primary objective of this research was to develop analytical approximations to predict performance measures of manufacturing systems with job failures and parallel processing. The analytical formulations extend previous queueing models used in assembly manufacturing systems in that they can handle serial and different configurations of parallel processing with multiple product classes, and job circulation due to random part failures. In addition, appropriate correction terms obtained via regression analysis were added to the approximations in order to minimize the error between the analytical approximations and the simulation models. Markovian and general-type manufacturing systems were studied, with multiple product classes, job circulation due to failures, and fork-join systems to model parallel processing. In both the Markovian and general cases, the approximations without correction terms performed quite well for one- and two-product problem instances. However, the flow time error increased as the number of products and the net traffic intensity increased. Therefore, correction terms for single and fork-join stations were developed via regression analysis to deal with more than two products. Numerical comparisons showed that the approximations perform remarkably well when the correction factors are used: on average, the flow time error was reduced from 38.19% to 5.59% in the Markovian case, and from 26.39% to 7.23% in the general case. 
All the equations stated in the analytical formulations were implemented as a set of Matlab scripts. Using this set, operations managers of web server assembly lines, or of manufacturing and service systems with similar characteristics, can estimate different system performance measures and make judicious decisions, especially in setting delivery due dates, capacity planning, and bottleneck mitigation, among others.
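The dissertation's approximations are not reproduced in the abstract; as a baseline, the M/M/1 building block such models extend gives an expected time in system of 1/(μ − λ) per station, and job circulation from a failure probability p inflates the effective arrival rate to λ/(1 − p) (a geometric number of passes per job). A sketch of that baseline, with made-up rates:

```python
def serial_line_flow_time(arrival_rate, service_rates, p_fail=0.0):
    """Baseline M/M/1 flow-time estimate for a serial line with job
    circulation: p_fail inflates the effective arrival rate because
    each job makes a geometric number of passes."""
    eff = arrival_rate / (1.0 - p_fail)
    total = 0.0
    for mu in service_rates:
        if eff >= mu:
            raise ValueError("station is unstable at this load")
        total += 1.0 / (mu - eff)   # M/M/1 expected time in system
    return total

print(round(serial_line_flow_time(0.8, [1.0, 1.2, 1.5], p_fail=0.1), 3))
```

The regression-based correction terms described above exist precisely because this kind of product-form baseline degrades for multi-class, fork-join configurations at high traffic intensity.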

Relevance:

30.00%

Publisher:

Abstract:

Standard economic theory suggests that capital should flow from rich countries to poor countries. However, capital has predominantly flowed to rich countries. The three essays in this dissertation attempt to explain this phenomenon. The first two essays suggest theoretical explanations for why capital has not flowed to poor countries; the third essay empirically tests the theoretical explanations. The first essay examines the effects of increasing returns to scale on international lending and borrowing with moral hazard. Introducing increasing returns in a two-country general equilibrium model yields possible multiple equilibria and helps explain the possibility of capital flows from a poor to a rich country. I find that a borrowing country may need to borrow sufficient amounts internationally to reach a minimum investment threshold in order to invest domestically. The second essay examines how a poor country may invest in sectors with low productivity because of sovereign risk, and how collateral differences across sectors may exacerbate the problem. I model sovereign borrowing with a two-sector economy: one sector with increasing returns to scale (IRS) and one sector with diminishing returns to scale (DRS). Countries with incomes below a threshold will invest only in the DRS sector, and countries with incomes above a threshold will invest mostly in the IRS sector. The results help explain the existence of a bimodal world income distribution. The third essay empirically tests the explanations for why capital has not flowed from rich to poor countries, with a focus on institutions and initial capital. I find that institutional variables are a very important factor but, in contrast to other studies, I show that institutions do not account for the Lucas Paradox. Evidence of increasing returns still exists, even when controlling for institutions and other variables. In addition, I find that the determinants of capital flows may depend on whether a country is rich or poor.

Relevance:

30.00%

Publisher:

Abstract:

The freshwater Everglades is a complex system containing thousands of tree islands embedded within a marsh-grassland matrix. The tree island-marsh mosaic is shaped and maintained by hydrologic, edaphic and biological mechanisms that interact across multiple scales. Preserving tree islands requires a more integrated understanding of how scale-dependent phenomena interact in the larger freshwater system. The hierarchical patch dynamics paradigm provides a conceptual framework for exploring multi-scale interactions within complex systems. We used a three-tiered approach to examine the spatial variability and patterning of nutrients in relation to site parameters within and between two hydrologically defined Everglades landscapes: the freshwater Marl Prairie and the Ridge and Slough. Results were scale-dependent and complexly interrelated. Total carbon and nitrogen patterning were correlated with organic matter accumulation, driven by hydrologic conditions at the system scale. Total and bioavailable phosphorus were most strongly related to woody plant patterning within landscapes, and were found to be 3 to 11 times more concentrated in tree island soils compared to surrounding marshes. Below canopy resource islands in the slough were elongated in a downstream direction, indicating soil resource directional drift. Combined multi-scale results suggest that hydrology plays a significant role in landscape patterning and also the development and maintenance of tree islands. Once developed, tree islands appear to exert influence over the spatial distribution of nutrients, which can reciprocally affect other ecological processes.

Relevance:

30.00%

Publisher:

Abstract:

Understanding habitat selection and movement remains a key question in behavioral ecology. Yet, obtaining a sufficiently high spatiotemporal resolution of the movement paths of organisms remains a major challenge, despite recent technological advances. Observing fine-scale movement and habitat choice decisions in the field can prove to be difficult and expensive, particularly in expansive habitats such as wetlands. We describe the application of passive integrated transponder (PIT) systems to field enclosures for tracking detailed fish behaviors in an experimental setting. PIT systems have been applied to habitats with clear passageways, at fixed locations or in controlled laboratory and mesocosm settings, but their use in unconfined habitats and field-based experimental setups remains limited. In an Everglades enclosure, we continuously tracked the movement and habitat use of PIT-tagged centrarchids across three habitats of varying depth and complexity using multiple flatbed antennas for 14 days. Fish used all three habitats, with marked species-specific diel movement patterns across habitats, and short-lived movements that would be likely missed by other tracking techniques. Findings suggest that the application of PIT systems to field enclosures can be an insightful approach for gaining continuous, undisturbed and detailed movement data in unconfined habitats, and for experimentally manipulating both internal and external drivers of these behaviors.