931 results for Capture-recapture Data


Relevance:

30.00%

Publisher:

Abstract:

Few works address methodological issues of how to conduct strategy-as-practice research, and even fewer focus on how to analyse the resulting data in ways that illuminate strategy as an everyday, social practice. We address this gap by proposing a quantitative method for analysing observational data, which can complement more traditional qualitative methodologies. We propose that rigorous but context-sensitive coding of transcripts can render everyday practice analysable statistically. Such statistical analysis provides a means of analytically representing patterns and shifts within the mundane, repetitive elements through which practice is accomplished. We call this approach the Event Database (EDB); it consists of five basic coding categories that help us capture the stream of practice. Indexing codes help to index or categorise the data, giving context and some basic information about the event under discussion; they are descriptive codes that allow us to catalogue and classify events according to their assigned characteristics. Content codes concern the qualitative nature of the event, its essence: a description that helps to inform judgements about the phenomenon. Nature codes help us distinguish between discursive and tangible events; we include this code to acknowledge that some events differ qualitatively from others. Type codes are abstracted from the data to help us classify events based on their description or nature; this involves significantly more judgement than the indexing codes but is consequently also more meaningful. Dynamics codes help us capture some of the movement or fluidity of events; this category has been included to let us capture the flow of activity over time.
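
As a rough illustration of how an EDB record built from these five coding categories might be structured, the sketch below (in Python) defines a minimal event data type; every field name and example value is hypothetical and not taken from the paper.

from dataclasses import dataclass

@dataclass
class EDBEvent:
    """One coded event in a hypothetical Event Database (EDB)."""
    index: dict        # indexing codes: descriptive context (who, where, when)
    content: str       # content code: the qualitative essence of the event
    nature: str        # nature code: "discursive" or "tangible"
    event_type: str    # type code: category abstracted from the data
    dynamics: str      # dynamics code: movement/flow of the event over time

# Hypothetical example record
event = EDBEvent(
    index={"meeting": "strategy away-day", "speaker": "CFO", "minute": 42},
    content="CFO questions the proposed acquisition budget",
    nature="discursive",
    event_type="challenge",
    dynamics="initiated",
)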

Relevance:

30.00%

Publisher:

Abstract:

Foley [J. Opt. Soc. Am. A 11 (1994) 1710] has proposed an influential psychophysical model of masking in which mask components in a contrast gain pool are raised to an exponent before summation and divisive inhibition. We tested this summation rule in experiments in which contrast detection thresholds were measured for a vertical 1 c/deg (or 2 c/deg) sine-wave component in the presence of a 3 c/deg (or 6 c/deg) mask that had either a single component oriented at -45° or a pair of components oriented at ±45°. Contrary to the predictions of Foley's model 3, we found that for masks of moderate contrast and above, threshold elevation was predicted by linear summation of the mask components in the inhibitory stage of the contrast gain pool. We built this feature into two new models, referred to as the early adaptation model and the hybrid model. In the early adaptation model, contrast adaptation controls a threshold-like nonlinearity on the output of otherwise linear pathways that provide the excitatory and inhibitory inputs to a gain control stage. The hybrid model involves nonlinear and nonadaptable routes to excitatory and inhibitory stages as well as an adaptable linear route. With only six free parameters, both models provide excellent fits to the masking and adaptation data of Foley and Chen [Vision Res. 37 (1997) 2779] but unlike Foley and Chen's model, are able to do so with only one adaptation parameter. However, only the hybrid model is able to capture the features of Foley's (1994) pedestal plus orthogonal fixed mask data. We conclude that (1) linear summation of inhibitory components is a feature of contrast masking, and (2) that the main aftereffect of spatial adaptation on contrast increment thresholds can be assigned to a single site. © 2002 Elsevier Science Ltd. All rights reserved.
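
To make the summation-rule distinction concrete, the sketch below implements a generic divisive contrast-gain-control response in which the mask components entering the inhibitory denominator are either raised to an exponent before summation (Foley-style) or summed linearly first (the behaviour reported here for moderate-to-high mask contrasts). It is a schematic stand-in, not the authors' model 3, early adaptation model, or hybrid model; the exponents and constant are placeholder values.

import numpy as np

def gain_control_response(c_target, mask_contrasts, p=2.4, q=2.0, Z=0.01,
                          linear_inhibitory_sum=False):
    """Schematic divisive contrast gain-control response (illustrative only).

    linear_inhibitory_sum=True : mask components are summed linearly before
        entering the inhibitory denominator (the behaviour reported in the
        abstract for moderate-to-high mask contrasts).
    linear_inhibitory_sum=False: each component is raised to the exponent q
        before summation, as in a Foley (1994)-style gain pool.
    """
    mask_contrasts = np.asarray(mask_contrasts, dtype=float)
    if linear_inhibitory_sum:
        inhibition = mask_contrasts.sum() ** q
    else:
        inhibition = (mask_contrasts ** q).sum()
    return c_target ** p / (Z + inhibition)

# A pair of oblique mask components vs a single component of the same total
# contrast: with linear summation both produce identical inhibition.
print(gain_control_response(0.05, [0.1, 0.1], linear_inhibitory_sum=True))
print(gain_control_response(0.05, [0.2], linear_inhibitory_sum=True))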

Relevance:

30.00%

Publisher:

Abstract:

Exploratory analysis of data seeks to find common patterns to gain insights into the structure and distribution of the data. In geochemistry it is a valuable means of gaining insight into the complicated processes making up a petroleum system. Typically, linear visualisation methods such as principal components analysis, linked plots, or brushing are used. These methods cannot be employed directly when dealing with missing data, and while they can capture non-linear structure in the data locally, they struggle to capture it globally. This thesis discusses a complementary approach based on a non-linear probabilistic model. The generative topographic mapping (GTM) enables the visualisation of the effects of very many variables on a single plot, which is able to incorporate more structure than a two-dimensional principal components plot. The model can deal with uncertainty and missing data, and allows for the exploration of the non-linear structure in the data. In this thesis a novel approach to initialising the GTM with arbitrary projections is developed. This makes it possible to combine GTM with algorithms such as Isomap and to fit complex non-linear structures like the Swiss roll. Another novel extension is the incorporation of prior knowledge about the structure of the covariance matrix. This extension greatly enhances the modelling capabilities of the algorithm, resulting in a better fit to the data and better imputation of missing values. Additionally, an extensive benchmark study of the missing-data imputation capabilities of GTM is performed. Further, a novel approach based on missing data is introduced to benchmark the fit of probabilistic visualisation algorithms on unlabelled data. Finally, the work is complemented by evaluating the algorithms on real-life datasets from geochemical projects.
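
A minimal sketch of the kind of hold-out benchmarking of imputation the abstract alludes to: known entries are hidden at random, imputed, and scored against their true values. The column-mean imputer is only a placeholder where a fitted GTM's reconstruction would go, and the masking fraction is an arbitrary assumption.

import numpy as np

def benchmark_imputation(X, impute_fn, missing_frac=0.1, seed=0):
    """Hide a random fraction of entries, impute them, and return the RMSE."""
    rng = np.random.default_rng(seed)
    X = np.asarray(X, dtype=float)
    mask = rng.random(X.shape) < missing_frac        # entries to hide
    X_missing = X.copy()
    X_missing[mask] = np.nan
    X_imputed = impute_fn(X_missing)
    err = X_imputed[mask] - X[mask]
    return np.sqrt(np.mean(err ** 2))

def column_mean_impute(X):
    """Baseline imputer: replace NaNs with the per-variable mean."""
    X = X.copy()
    col_means = np.nanmean(X, axis=0)
    idx = np.where(np.isnan(X))
    X[idx] = np.take(col_means, idx[1])
    return X

# Usage with synthetic data; a fitted GTM's posterior-mean reconstruction
# would be passed in place of column_mean_impute.
X = np.random.default_rng(1).normal(size=(200, 8))
print(benchmark_imputation(X, column_mean_impute))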

Relevance:

30.00%

Publisher:

Abstract:

Exploratory analysis of petroleum geochemical data seeks to find common patterns to help distinguish between different source rocks, oils and gases, and to explain their source, maturity and any intra-reservoir alteration. However, at the outset, one is typically faced with (a) a large matrix of samples, each with a range of molecular and isotopic properties, (b) a spatially and temporally unrepresentative sampling pattern, (c) noisy data and (d) often, a large number of missing values. This inhibits analysis using conventional statistical methods. Typically, visualisation methods like principal components analysis are used, but these methods are not easily able to deal with missing data nor can they capture non-linear structure in the data. One approach to discovering complex, non-linear structure in the data is through the use of linked plots, or brushing, while ignoring the missing data. In this paper we introduce a complementary approach based on a non-linear probabilistic model. Generative topographic mapping enables the visualisation of the effects of very many variables on a single plot, while also dealing with missing data. We show how using generative topographic mapping also provides an optimal method with which to replace missing values in two geochemical datasets, particularly where a large proportion of the data is missing.
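
To illustrate how a GTM-style model can supply replacement values for missing entries, the sketch below computes mixture responsibilities from the observed dimensions only (assuming the grid-node centres and noise precision of a fitted GTM are already available, with equal mixing weights) and fills the missing dimensions with the responsibility-weighted mean of the centres. Variable names and the isotropic-noise simplification are illustrative assumptions rather than the authors' implementation.

import numpy as np

def impute_with_gtm_centres(x, centres, beta):
    """Fill NaNs in one sample using the mixture centres of a fitted GTM.

    x       : (D,) data vector with NaNs marking missing values
    centres : (K, D) data-space centres of the latent grid nodes
    beta    : inverse noise variance of the isotropic Gaussian components
    """
    x = np.asarray(x, dtype=float)
    obs = ~np.isnan(x)
    # Responsibilities over grid nodes, using observed dimensions only
    sq_dist = ((centres[:, obs] - x[obs]) ** 2).sum(axis=1)
    log_r = -0.5 * beta * sq_dist
    log_r -= log_r.max()                      # numerical stability
    r = np.exp(log_r)
    r /= r.sum()
    # Posterior-mean reconstruction of the missing dimensions
    x_filled = x.copy()
    x_filled[~obs] = r @ centres[:, ~obs]
    return x_filled

# Toy usage: three centres in 3-D, one sample with a missing value
centres = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0], [2.0, 2.0, 2.0]])
print(impute_with_gtm_centres(np.array([1.1, np.nan, 0.9]), centres, beta=10.0))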

Relevance:

30.00%

Publisher:

Abstract:

In order to generate sales promotion response predictions, marketing analysts estimate demand models using either disaggregated (consumer-level) or aggregated (store-level) scanner data. Comparison of predictions from these demand models is complicated by the fact that models may accommodate different forms of consumer heterogeneity depending on the level of data aggregation. This study shows via simulation that demand models with various heterogeneity specifications do not produce more accurate sales response predictions than a homogeneous demand model applied to store-level data, with one major exception: a random coefficients model designed to capture within-store heterogeneity using store-level data produced significantly more accurate sales response predictions (as well as better fit) than the other model specifications. An empirical application to the paper towel product category provides further insights. This article has supplementary material online.
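
The toy simulation below (a sketch under assumed numbers, not the study's design) illustrates why heterogeneity matters for prediction: store-level log sales are generated with price sensitivities that vary across stores, and a single pooled elasticity is compared with store-specific fits on held-out weeks. Note that it only mimics across-store heterogeneity, whereas the exception highlighted in the abstract concerns within-store heterogeneity captured by a random coefficients model.

import numpy as np

rng = np.random.default_rng(0)
n_stores, n_weeks = 50, 104
beta_store = rng.normal(-2.0, 0.6, size=n_stores)      # heterogeneous price elasticities
log_price = rng.normal(0.0, 0.2, size=(n_stores, n_weeks))
log_sales = 5.0 + beta_store[:, None] * log_price + rng.normal(0, 0.1, (n_stores, n_weeks))

train, test = slice(0, 52), slice(52, None)

# Homogeneous model: one elasticity and intercept pooled across all stores
b1, b0 = np.polyfit(log_price[:, train].ravel(), log_sales[:, train].ravel(), 1)
pred_pooled = b0 + b1 * log_price[:, test]

# Heterogeneous model: a separate elasticity and intercept per store
coefs = np.array([np.polyfit(log_price[s, train], log_sales[s, train], 1)
                  for s in range(n_stores)])            # rows: (slope, intercept)
pred_store = coefs[:, 1:] + coefs[:, :1] * log_price[:, test]

rmse = lambda p: np.sqrt(np.mean((p - log_sales[:, test]) ** 2))
print("pooled RMSE:", rmse(pred_pooled), "per-store RMSE:", rmse(pred_store))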

Relevance:

30.00%

Publisher:

Abstract:

This paper addresses the problem of obtaining detailed 3D reconstructions of human faces in real time and with inexpensive hardware. We present an algorithm based on a monocular multi-spectral photometric-stereo setup. This type of system is known to capture highly detailed, deforming 3D surfaces at high frame rates without requiring expensive hardware or a synchronized light stage. However, the main challenge of such a setup is the calibration stage, which depends on the lighting setup and how the lights interact with the specific material being captured, in this case human faces. For this purpose we develop a self-calibration technique in which the person being captured is asked to perform a rigid motion in front of the camera while maintaining a neutral expression. Rigidity constraints are then used to compute the head's motion with a structure-from-motion algorithm. Once the motion is obtained, a multi-view stereo algorithm reconstructs a coarse 3D model of the face. This coarse model is then used to estimate the lighting parameters with a stratified approach: in the first step we use a RANSAC search to identify purely diffuse points on the face and simultaneously estimate the diffuse reflectance model; in the second step we apply non-linear optimization to fit a non-Lambertian reflectance model to the outliers of the previous step. The calibration procedure is validated with synthetic and real data.
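
A bare-bones sketch of the flavour of the RANSAC stage described above, for one spectral channel: assuming a plain Lambertian model I ≈ n·l, minimal sets of (normal, intensity) samples from the coarse face model propose a light vector, inliers stand in for the purely diffuse points, and the best hypothesis is refined by least squares. The thresholds, sample size and Lambertian form are assumptions for illustration, not the paper's exact formulation.

import numpy as np

def ransac_light_vector(normals, intensities, n_iters=500, inlier_thresh=0.02, seed=0):
    """Estimate a single-channel light vector l such that I ~ n . l (Lambertian).

    normals     : (N, 3) unit surface normals from the coarse face model
    intensities : (N,) observed intensities in one spectral channel
    Returns the refined light vector and a boolean inlier mask; the inliers
    play the role of the purely diffuse points, while the outliers would be
    handed to the non-Lambertian fitting stage.
    """
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(normals), dtype=bool)
    for _ in range(n_iters):
        idx = rng.choice(len(normals), size=3, replace=False)
        l, *_ = np.linalg.lstsq(normals[idx], intensities[idx], rcond=None)
        residuals = np.abs(normals @ l - intensities)
        inliers = residuals < inlier_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refine the light vector on all inliers
    l, *_ = np.linalg.lstsq(normals[best_inliers], intensities[best_inliers], rcond=None)
    return l, best_inliers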

Relevance:

30.00%

Publisher:

Abstract:

This paper presents the digital imaging results of a collaborative research project working toward the generation of an on-line interactive digital image database of signs from ancient cuneiform tablets. An important aim of this project is the application of forensic analysis to the cuneiform symbols to identify scribal hands. Cuneiform tablets are amongst the earliest records of written communication, and could be considered one of the original information technologies: an accessible, portable and robust medium for communication across distance and time. The earliest examples are up to 5,000 years old, and the writing technique remained in use for some 3,000 years. Unfortunately, only a small fraction of these tablets can be made available for display in museums, and much important academic work has yet to be performed on the very large numbers of tablets to which there is necessarily restricted access. Our paper describes the challenges encountered in the 2D image capture of a sample set of tablets held in the British Museum, explaining the motivation for attempting 3D imaging and the results of initial experiments scanning the smaller, more densely inscribed cuneiform tablets. We also discuss the tractability of 3D digital capture, representation and manipulation, and investigate the requirements for scalable data compression and transmission methods. Additional information can be found on the project website: www.cuneiform.net

Relevance:

30.00%

Publisher:

Abstract:

This thesis describes advances in the characterisation, calibration and data processing of optical coherence tomography (OCT) systems. Femtosecond (fs) laser inscription was used to produce OCT-phantoms. Transparent materials are generally inert to infrared radiation, but with fs lasers material modification occurs via non-linear processes when the highly focused light source interacts with the material. This modification is confined to the focal volume and is highly reproducible. In order to select the best inscription parameters, combinations of different inscription parameters were tested, using three fs laser systems with different operating properties, on a variety of materials. This facilitated an understanding of the key characteristics of the produced structures, with the aim of producing viable OCT-phantoms. Finally, OCT-phantoms were successfully designed and fabricated in fused silica. The use of these phantoms to characterise many properties of an OCT system (resolution, distortion, sensitivity decay, scan linearity) was demonstrated. Quantitative methods were developed to support the characterisation of an OCT system collecting images from phantoms and also to improve the quality of the OCT images. Characterisation methods include the measurement of the spatially variant resolution (point spread function (PSF) and modulation transfer function (MTF)), sensitivity and distortion. Processing of OCT data is computationally intensive. Standard central processing unit (CPU) based processing might take several minutes to a few hours to process acquired data, making data processing a significant bottleneck. An alternative is to use expensive hardware-based processing such as field programmable gate arrays (FPGAs). However, graphics processing unit (GPU) based data processing methods have recently been developed to minimise processing and rendering time. These techniques include standard processing methods, comprising a set of algorithms that process the raw interference data obtained by the detector and generate A-scans. The work presented here describes accelerated data processing and post-processing techniques for OCT systems. The GPU-based processing developed during the PhD was later implemented in a custom-built Fourier domain optical coherence tomography (FD-OCT) system. This system currently processes and renders data in real time, with processing throughput currently limited by the camera capture rate. OCT-phantoms have been heavily used for the qualitative characterisation and adjustment/fine-tuning of the operating conditions of the OCT system, and investigations are under way to characterise OCT systems using our phantoms. The work presented in this thesis demonstrates several novel techniques for fabricating OCT-phantoms and for accelerating OCT data processing using GPUs. In the process of developing the phantoms and quantitative methods, a thorough understanding and practical knowledge of OCT and fs laser processing systems was developed. This understanding has led to several novel pieces of research that are not only relevant to OCT but have broader importance. For example, extensive understanding of the properties of fs-inscribed structures will be useful in other photonic applications such as the fabrication of phase masks, waveguides and microfluidic channels. Acceleration of data processing with GPUs is also useful in other fields.
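
As a rough illustration of the standard processing chain mentioned above (the steps that turn raw detector interference data into A-scans), the sketch below performs background subtraction, windowing and an FFT on spectra assumed to be already sampled linearly in wavenumber; the GPU implementations discussed in the thesis would typically replace these NumPy calls with GPU equivalents. The array shapes and the omission of k-linearisation and dispersion compensation are simplifying assumptions.

import numpy as np

def spectra_to_ascans(raw_spectra, background):
    """Convert raw FD-OCT spectra (one row per A-scan) into depth profiles.

    raw_spectra : (n_ascans, n_pixels) raw interference spectra from the camera
    background  : (n_pixels,) reference/background spectrum to subtract
    Assumes the spectra are already linear in wavenumber (no resampling or
    dispersion compensation shown here).
    """
    fringes = raw_spectra - background                 # remove the DC/reference term
    window = np.hanning(fringes.shape[1])              # suppress FFT side lobes
    depth_profiles = np.fft.fft(fringes * window, axis=1)
    half = fringes.shape[1] // 2                       # keep positive depths only
    return 20 * np.log10(np.abs(depth_profiles[:, :half]) + 1e-12)   # dB scale

# On a GPU, an array library such as cupy could stand in for numpy here.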

Relevance:

30.00%

Publisher:

Abstract:

Presently, monoethanolamine (MEA) remains the industrial standard solvent for CO2 capture processes. Operating issues relating to corrosion and degradation of MEA at high temperatures and concentrations, and in the presence of oxygen, in a traditional PCC process have introduced the need for higher-quality, more costly stainless steels in the construction of capture equipment and for the use of oxygen scavengers and corrosion inhibitors. While capture processes employing MEA have improved significantly in recent times, there is a continued attraction towards alternative solvent systems which offer further improvements. This movement includes aqueous amine blends, which are gaining momentum as new-generation solvents for CO2 capture processes. Given the extensive array of amines available to date, endless opportunities exist to tune and tailor a solvent to deliver specific performance and physical properties in line with a desired capture process. The current work is focussed on the rationalisation of CO2 absorption behaviour in a series of aqueous amine blends incorporating monoethanolamine, N,N-dimethylethanolamine (DMEA), N,N-diethylethanolamine (DEEA) and 2-amino-2-methyl-1-propanol (AMP) as solvent components. Mass transfer/kinetic measurements have been performed using a wetted wall column (WWC) contactor at 40°C for a series of blends in which the blend properties, including amine concentration, blend ratio, and CO2 loadings from 0.0-0.4 (moles CO2/total moles amine), were systematically varied and assessed. Equilibrium CO2 solubility in each of the blends has been estimated using a software tool developed in Matlab for the prediction of vapour-liquid equilibrium, using a combination of the known chemical equilibrium reactions and constants for the individual amine components, which have been combined into a blend. From the CO2 mass transfer data, the largest absorption rates were observed in blends containing 3M MEA/3M Am2, while the selection of the Am2 component had only a marginal impact on mass transfer rates. Overall, CO2 mass transfer in the fastest blends containing 3M MEA/3M Am2 was found to be only slightly lower than in a 5M MEA solution at similar temperatures and CO2 loadings. In terms of equilibrium behaviour, a slight decrease in absorption capacity (moles CO2/mole amine) with increasing Am2 concentration in the blends with MEA was observed, while cyclic capacity followed the opposite trend. Significant increases in cyclic capacity (26-111%) were observed in all blends when compared to MEA solutions at similar temperatures and total amine concentrations. In view of the reasonable compromise between CO2 absorption rate and capacity, a blend containing 3M MEA and 3M AMP would represent a reasonable alternative to 5M MEA as a standalone solvent.
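
For readers unfamiliar with the capacity terms used above, the small sketch below shows the usual way a cyclic capacity is obtained from rich and lean CO2 loadings and the total amine concentration; the numbers are placeholders for illustration, not values measured in this work.

# Cyclic capacity from rich and lean CO2 loadings (illustrative numbers only)
total_amine = 6.0          # mol amine / L, e.g. a 3M MEA / 3M AMP blend
rich_loading = 0.50        # mol CO2 / mol amine leaving the absorber (placeholder)
lean_loading = 0.25        # mol CO2 / mol amine after regeneration (placeholder)

cyclic_capacity = (rich_loading - lean_loading) * total_amine   # mol CO2 / L solvent
print(f"cyclic capacity: {cyclic_capacity:.2f} mol CO2 per litre of solvent")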

Relevance:

30.00%

Publisher:

Abstract:

With the explosive growth of the volume and complexity of document data (e.g., news, blogs, web pages), it has become a necessity to semantically understand documents and deliver meaningful information to users. The areas dealing with these problems span data mining, information retrieval, and machine learning. For example, document clustering and summarization are two fundamental techniques for understanding document data and have attracted much attention in recent years. Given a collection of documents, document clustering aims to partition them into different groups to provide efficient document browsing and navigation mechanisms. One open question in document clustering is how to generate a meaningful interpretation for each document cluster resulting from the clustering process. Document summarization is another effective technique for document understanding, which generates a summary by selecting sentences that deliver the major or topic-relevant information in the original documents. How to improve automatic summarization performance and how to apply it to newly emerging problems are two valuable research directions. To help people capture the semantics of documents effectively and efficiently, this dissertation focuses on developing effective data mining and machine learning algorithms and systems for (1) integrating document clustering and summarization to obtain meaningful document clusters with summarized interpretations, (2) improving document summarization performance and building document understanding systems to solve real-world applications, and (3) summarizing the differences and evolution of multiple document sources.
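
As a generic illustration of pairing clustering with a summarized interpretation (not the dissertation's algorithms), the sketch below clusters a toy document collection with TF-IDF and k-means and labels each cluster with the member document closest to its centroid as a crude extractive summary.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = ["stock markets fall on rate fears", "central bank raises interest rates",
        "new smartphone released this week", "tech giant unveils latest phone"]

X = TfidfVectorizer(stop_words="english").fit_transform(docs)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

for c in range(km.n_clusters):
    members = np.where(km.labels_ == c)[0]
    # Crude cluster "summary": the member document closest to the centroid
    dists = np.linalg.norm(X[members].toarray() - km.cluster_centers_[c], axis=1)
    print(f"cluster {c}: {docs[members[np.argmin(dists)]]}")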

Relevance:

30.00%

Publisher:

Abstract:

The accurate and reliable estimation of travel time based on point detector data is needed to support Intelligent Transportation System (ITS) applications. It has been found that the quality of travel time estimation is a function of the method used in the estimation and varies for different traffic conditions. In this study, two hybrid on-line travel time estimation models, and their corresponding off-line methods, were developed to achieve better estimation performance under various traffic conditions, including recurrent congestion and incidents. The first model combines the Mid-Point method, which is a speed-based method, with a traffic flow-based method. The second model integrates two speed-based methods: the Mid-Point method and the Minimum Speed method. In both models, the switch between travel time estimation methods is based on the congestion level and queue status automatically identified by clustering analysis. During incident conditions with rapidly changing queue lengths, shock wave analysis-based refinements are applied for on-line estimation to capture the fast queue propagation and recovery. Travel time estimates obtained from existing speed-based methods, traffic flow-based methods, and the models developed were tested using both simulation and real-world data. The results indicate that all tested methods performed at an acceptable level during periods of low congestion. However, their performances vary with an increase in congestion. Comparisons with other estimation methods also show that the developed hybrid models perform well in all cases. Further comparisons between the on-line and off-line travel time estimation methods reveal that off-line methods perform significantly better only during fast-changing congested conditions, such as during incidents. The impacts of major influential factors on the performance of travel time estimation, including data preprocessing procedures, detector errors, detector spacing, frequency of travel time updates to traveler information devices, travel time link length, and posted travel time range, were investigated in this study. The results show that these factors have more significant impacts on the estimation accuracy and reliability under congested conditions than during uncongested conditions. For the incident conditions, the estimation quality improves with the use of a short rolling period for data smoothing, more accurate detector data, and frequent travel time updates.
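
A bare-bones sketch of the kind of speed-based estimation being combined here: the Mid-Point rule assigns half of the link to each boundary detector's speed, and a hybrid estimator might switch to the more conservative Minimum Speed rule when a congestion flag (from the clustering step) is raised. The switching direction, thresholds and units are assumptions for illustration, not the study's models.

def midpoint_travel_time(link_length_km, v_up_kmh, v_down_kmh):
    """Mid-Point method: each half of the link moves at its boundary detector's speed."""
    return 0.5 * link_length_km / v_up_kmh + 0.5 * link_length_km / v_down_kmh

def min_speed_travel_time(link_length_km, v_up_kmh, v_down_kmh):
    """Minimum Speed method: assume the whole link moves at the slower detector speed."""
    return link_length_km / min(v_up_kmh, v_down_kmh)

def hybrid_travel_time(link_length_km, v_up_kmh, v_down_kmh, congested):
    """Switch between the two speed-based rules based on a congestion flag."""
    if congested:
        return min_speed_travel_time(link_length_km, v_up_kmh, v_down_kmh)
    return midpoint_travel_time(link_length_km, v_up_kmh, v_down_kmh)

# Example: 1.5 km link, 80 km/h upstream, 30 km/h downstream, flagged congested
print(hybrid_travel_time(1.5, 80.0, 30.0, congested=True) * 3600, "seconds")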

Relevance:

30.00%

Publisher:

Abstract:

A method to estimate speed of free-ranging fishes using a passive sampling device is described and illustrated with data from the Everglades, U.S.A. Catch per unit effort (CPUE) from minnow traps embedded in drift fences was treated as an encounter rate and used to estimate speed, when combined with an independent estimate of density obtained by use of throw traps that enclose 1 m² of marsh habitat. Underwater video was used to evaluate capture efficiency and species-specific bias of minnow traps and two sampling studies were used to estimate trap saturation and diel-movement patterns; these results were used to optimize sampling and derive correction factors to adjust species-specific encounter rates for bias and capture efficiency. Sailfin mollies Poecilia latipinna displayed a high frequency of escape from traps, whereas eastern mosquitofish Gambusia holbrooki were most likely to avoid a trap once they encountered it; dollar sunfish Lepomis marginatus were least likely to avoid the trap once they encountered it or to escape once they were captured. Length of sampling and time of day affected CPUE; fishes generally had a very low retention rate over a 24 h sample time and only the Everglades pygmy sunfish Elassoma evergladei were commonly captured at night. Dispersal speed of fishes in the Florida Everglades, U.S.A., was shown to vary seasonally and among species, ranging from 0.05 to 0.15 m s⁻¹ for small poeciliids and fundulids to 0.1 to 1.8 m s⁻¹ for L. marginatus. Speed was generally highest late in the wet season and lowest in the dry season, possibly tied to dispersal behaviours linked to finding and remaining in dry-season refuges. These speed estimates can be used to estimate the diffusive movement rate, which is commonly employed in spatial ecological models.
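
The conversion from an encounter rate to a speed can be made explicit with a simple ideal-gas encounter model; the sketch below uses the standard two-dimensional form (encounters per unit time with a stationary barrier of width w are roughly 2·D·v·w/π), with the bias and efficiency corrections described above folded into a single multiplicative factor. The choice of formula, the variable names and the example numbers are illustrative assumptions rather than the paper's exact estimator.

import math

def estimate_speed(cpue_per_hour, density_per_m2, trap_width_m, capture_correction=1.0):
    """Estimate mean fish speed (m/s) from a trap encounter rate and an independent density.

    cpue_per_hour      : catch per trap per hour, treated as an encounter rate
    density_per_m2     : independent density estimate (e.g. from 1 m^2 throw traps)
    trap_width_m       : effective width of the trap/drift-fence opening
    capture_correction : factor adjusting CPUE for capture efficiency and species bias
    """
    encounters_per_s = cpue_per_hour * capture_correction / 3600.0
    # 2-D ideal-gas model: E = (2/pi) * D * v * w  =>  v = pi * E / (2 * D * w)
    return math.pi * encounters_per_s / (2.0 * density_per_m2 * trap_width_m)

# Placeholder numbers for illustration only
print(estimate_speed(cpue_per_hour=0.5, density_per_m2=8.0, trap_width_m=0.25,
                     capture_correction=2.0))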

Relevance:

30.00%

Publisher:

Abstract:

This paper details the research methods an introductory qualitative research class used to both study an issue related to race and identity, and to familiarize themselves with data collection strategies. Throughout the paper the authors attempt to capture the challenges, disagreements, and consensus building that marked this unusual research endeavor.

Relevance:

30.00%

Publisher:

Abstract:

Online Social Network (OSN) services provided by Internet companies bring people together to chat and to share and consume information. Meanwhile, huge amounts of data are generated by those services (which can be regarded as social media) every day, every hour, even every minute and every second. Currently, researchers are interested in analyzing the OSN data, extracting interesting patterns from it, and applying those patterns to real-world applications. However, due to the large scale of the OSN data, it is difficult to analyze it effectively. This dissertation focuses on applying data mining and information retrieval techniques to mine two key components of the social media data: users and user-generated contents. Specifically, it aims at addressing three problems related to social media users and contents: (1) how does one organize the users and the contents? (2) how does one summarize the textual contents so that users do not have to go over every post to capture the general idea? (3) how does one identify the influential users in the social media to benefit other applications, e.g., marketing campaigns? The contributions of this dissertation are briefly summarized as follows. (1) It provides a comprehensive and versatile data mining framework to analyze the users and user-generated contents from the social media. (2) It designs a hierarchical co-clustering algorithm to organize the users and contents. (3) It proposes multi-document summarization methods to extract core information from the social network contents. (4) It introduces three important dimensions of social influence, and a dynamic influence model for identifying influential users.
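
As a generic illustration of co-clustering users and content terms together (the dissertation designs its own hierarchical co-clustering algorithm; the spectral co-clustering below is only a readily available stand-in), the sketch partitions a small user-by-term count matrix into joint user/term groups.

import numpy as np
from sklearn.cluster import SpectralCoclustering

# Rows are users, columns are terms they used in their posts (toy counts)
terms = ["football", "goal", "league", "camera", "lens", "photo"]
user_term = np.array([
    [5, 3, 4, 0, 0, 1],
    [4, 2, 5, 1, 0, 0],
    [0, 0, 1, 6, 4, 5],
    [1, 0, 0, 5, 6, 4],
])

model = SpectralCoclustering(n_clusters=2, random_state=0).fit(user_term)
for c in range(2):
    users = np.where(model.row_labels_ == c)[0]
    topic = [terms[j] for j in np.where(model.column_labels_ == c)[0]]
    print(f"co-cluster {c}: users {users.tolist()} ~ terms {topic}")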