991 results for "Combining method"

Relevance: 30.00%

Publisher:

Abstract:
The Municipality of Anchorage (MOA) is required to better manage, operate and control municipal solid waste (MSW) after the Anchorage Assembly instituted a Zero Waste Policy. Two household curbside recycling programs (CRPs), pay-as-you-throw (PAYT) and single-stream, were compared and evaluated to determine an optimal municipal solid waste diversion method for households within the MOA. The analyses find that: (1) a CRP must be designed from comprehensive analysis, models and data correlations that combine demographic and psychographic variables; and (2) CRPs can be easily adjusted towards community-specific goals using technologies such as Geographic Information Systems (GIS) and Radio Frequency Identification (RFID). Combining the resources of policy-makers, businesses, and other viable actors is a necessary component of producing a sustainable, economically viable curbside recycling program.

The use of 3D data in mobile robotics provides valuable information about the robot’s environment. Traditionally, stereo cameras have been used as a low-cost 3D sensor. However, their lack of precision on poorly textured surfaces suggests that other 3D sensors could be more suitable. In this work, we examine the use of two sensors: an infrared SR4000 time-of-flight camera and a Kinect camera. We combine the 3D data obtained by these cameras with features extracted from the corresponding 2D images, applying a Growing Neural Gas (GNG) network to the 3D data. The goal is to obtain a robust egomotion technique, with the GNG network used to reduce the camera error. To calculate the egomotion, we test two 3D registration methods: one based on the iterative closest point (ICP) algorithm, and another that employs random sample consensus (RANSAC). Finally, a simultaneous localization and mapping (SLAM) method is applied to the complete sequence to reduce the global error. The error from each sensor and the mapping results from the proposed method are examined.
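The registration step that ICP iterates has a closed-form core: given tentative point correspondences, the least-squares rotation and translation follow from an SVD of the cross-covariance matrix (the Kabsch solution). This is a minimal sketch, not the thesis's implementation; the function name and the idea of using GNG nodes as the matched points are illustrative assumptions.

```python
import numpy as np

def rigid_align(P, Q):
    """Least-squares rotation R and translation t with R @ P + t ~ Q.

    P, Q: 3xN arrays of corresponding 3D points (for instance, GNG nodes
    matched between two frames). Closed-form SVD (Kabsch) solution; ICP
    alternates this step with nearest-neighbour matching.
    """
    p_bar = P.mean(axis=1, keepdims=True)
    q_bar = Q.mean(axis=1, keepdims=True)
    H = (P - p_bar) @ (Q - q_bar).T                  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Guard against reflections: force det(R) = +1
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = q_bar - R @ p_bar
    return R, t
```

In a full ICP loop this solve would be repeated after re-matching nearest neighbours; RANSAC would instead run it on random minimal subsets and keep the transform with the most inliers.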

Univariate linkage analysis is used routinely to localise genes for human complex traits. Often, many traits are analysed, but the significance of linkage for each trait is not corrected for multiple trait testing, which increases the experiment-wise type-I error rate. In addition, univariate analyses do not realise the full power provided by multivariate data sets. Multivariate linkage is the ideal solution, but it is computationally intensive, so genome-wide analysis and evaluation of empirical significance are often prohibitive. We describe two simple methods that efficiently address these limitations by combining P-values from multiple univariate linkage analyses. The first method estimates empirical pointwise and genome-wide significance between one trait and one marker when multiple traits have been tested. It is as robust as an appropriate Bonferroni adjustment, with the advantage that no assumptions are required about the number of independent tests performed. The second method estimates the significance of linkage between multiple traits and one marker and can therefore be used to localise regions that harbour pleiotropic quantitative trait loci (QTL). We show that this method has greater power than individual univariate analyses to detect a pleiotropic QTL across a range of situations. In addition, when traits are moderately correlated and the QTL influences all traits, it can outperform formal multivariate variance components (VC) analysis. The approach is computationally feasible for any number of traits and is not affected by the residual correlation between traits. We illustrate its utility with a genome scan of three asthma traits measured in families with a twin proband.
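As a hedged illustration of the first method's core idea (calibrating the smallest of several univariate p-values against its null distribution, rather than applying a fixed Bonferroni factor), here is a toy Monte Carlo version. It assumes independent traits, whereas the paper's empirical procedure also handles correlated traits; all names are invented.

```python
import numpy as np

def empirical_min_p(p_observed, n_traits, n_sims=200_000, seed=1):
    """Pointwise significance of the best of n_traits univariate linkage
    p-values at one marker, calibrated by Monte Carlo instead of Bonferroni.

    Toy version assuming independent traits; with correlated traits the
    null min-p values would instead come from permutation of the real data.
    """
    rng = np.random.default_rng(seed)
    # Null: each trait's p-value is Uniform(0,1); keep the minimum per replicate
    null_min_p = rng.uniform(size=(n_sims, n_traits)).min(axis=1)
    # Empirical significance = how often the null beats the observed minimum
    return (null_min_p <= p_observed).mean()
```

For independent traits the exact answer is P(min p <= x) = 1 - (1 - x)^T, so the simulation can be checked against that closed form.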

The importance of availability of comparable real income aggregates and their components to applied economic research is highlighted by the popularity of the Penn World Tables. Any methodology designed to achieve such a task requires the combination of data from several sources. The first is purchasing power parities (PPP) data available from the International Comparisons Project roughly every five years since the 1970s. The second is national level data on a range of variables that explain the behaviour of the ratio of PPP to market exchange rates. The final source of data is the national accounts publications of different countries which include estimates of gross domestic product and various price deflators. In this paper we present a method to construct a consistent panel of comparable real incomes by specifying the problem in state-space form. We present our completed work as well as briefly indicate our work in progress.
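The abstract does not spell out the state-space model, so the following is only an illustrative sketch: the standard Kalman filter for a local-level model, with missing observations skipped, which suits data such as PPP benchmarks observed only every few years. All parameters and series here are hypothetical.

```python
import numpy as np

def kalman_local_level(y, q, r, a0=0.0, p0=1e6):
    """Kalman filter for the local-level state-space model
        alpha_t = alpha_{t-1} + eta_t,  eta ~ N(0, q)   (latent level)
        y_t     = alpha_t     + eps_t,  eps ~ N(0, r)   (observation)
    Missing observations (np.nan) are simply skipped, so a series observed
    only in benchmark years is handled naturally by the same recursion.
    """
    a, p = a0, p0
    filtered = []
    for yt in y:
        p = p + q                      # predict: state variance grows
        if not np.isnan(yt):           # update only when y_t is observed
            k = p / (p + r)            # Kalman gain
            a = a + k * (yt - a)
            p = (1 - k) * p
        filtered.append(a)
    return np.array(filtered)
```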

In this report we discuss the problem of combining spatially distributed predictions from neural networks. An example of this problem is the prediction of a wind vector field from remote-sensing data by combining bottom-up predictions (wind vector predictions on a pixel-by-pixel basis) with prior knowledge about wind-field configurations. This task can be achieved using the scaled-likelihood method, which has been used by Morgan and Bourlard (1995) and Smyth (1994) in the context of hidden Markov modelling.
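The scaled-likelihood trick divides the network's posterior p(c|x) by the class prior p(c), yielding p(x|c)/p(x), which can stand in for a likelihood inside a prior model because the p(x) factor is constant across classes. Below is a toy sketch under strong simplifying assumptions (one global class per configuration, independent pixels); it is not the report's wind-field model.

```python
import numpy as np

def scaled_likelihoods(posteriors, class_priors):
    """Convert network posteriors p(c|x) into scaled likelihoods
    p(x|c)/p(x) = p(c|x)/p(c); these can replace true likelihoods inside
    a prior model (an HMM, or a prior over field configurations), since
    the dropped p(x) factor is the same for every class."""
    return posteriors / class_priors

def combine_with_prior(posteriors, class_priors, config_prior):
    """Score each global configuration c by prior(c) times the product
    over pixels of that pixel's scaled likelihood under c, in log space
    for stability, then normalise to a posterior over configurations."""
    sl = scaled_likelihoods(posteriors, class_priors)   # (n_pixels, n_classes)
    log_score = np.log(config_prior) + np.log(sl).sum(axis=0)
    log_score -= log_score.max()
    w = np.exp(log_score)
    return w / w.sum()
```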

This thesis explores efforts to conjoin organisational contexts and capabilities in explaining sustainable competitive advantage. Oliver (1997) argued that organisations need to balance the need to conform to an industry’s requirements in order to attain legitimacy (e.g. DiMaggio & Powell, 1983) against the need for resource optimization (e.g. Barney, 1991). The author hypothesized that such balance can be viewed as movement along a homogeneity-heterogeneity continuum. An organisation in a homogeneous industry possesses characteristics similar to its competitors, as opposed to a heterogeneous industry, in which organisations are differentiated and competitively positioned (Oliver, 1997). The movement is influenced by the dynamic environmental conditions an organisation is experiencing. The author extended Oliver’s (1997) proposition of combining the RBV’s focus on capabilities with institutional theory’s focus on organisational context, and redefined organisational receptivity towards change (ORC) factors from Butler and Allen’s (2008) findings, contributing to the theoretical development of ORC theory to explain the attainment of sustainable competitive advantage. ORC adopts assumptions from both institutional and RBV theories, with receptivity factors spanning both organisational contexts and capabilities. The thesis employed a mixed-methods approach in which sequential qualitative and quantitative studies were deployed to establish a robust, reliable, and valid ORC scale. Hinkin’s (1995) three-phase scale-development process was adopted and updated: items generated from interviews and literature reviews went through several rounds of exploratory factor analysis (EFA) and confirmatory factor analysis (CFA) to achieve convergent, discriminant, and nomological validity. Samples in the first phase (semi-structured interviews) were hotel owners and managers; in the second phase, MBA students and employees of the private and public sectors; and in the third phase, hotel managers. The final ORC scale is a parsimonious second-order latent construct whose first-order level comprises four latent receptivity factors: ideological vision (4 items), leading change (4 items), implementation capacity (4 items), and change orientation (7 items). Hypothesis testing revealed that high levels of perceived environmental uncertainty lead to high levels of the receptivity factors. Furthermore, the study found strong positive correlations between the receptivity factors and competitive advantage, and between the receptivity factors and organisational performance. Mediation analyses revealed that the receptivity factors partially mediate the relationships between perceived environmental uncertainty and both competitive advantage and organisational performance.
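Mediation analyses of the kind reported above are commonly run as product-of-coefficients estimates (indirect effect = a * b from two regressions). Here is a self-contained sketch on simulated data; the variables, coefficients, and sample size are invented, and the thesis's actual analysis used its own survey measures and modelling machinery.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
# Simulated stand-ins: x = perceived environmental uncertainty,
# m = a receptivity factor (mediator), y = organisational performance.
x = rng.standard_normal(n)
m = 0.5 * x + rng.standard_normal(n)              # a path: X -> M
y = 0.4 * m + 0.2 * x + rng.standard_normal(n)    # b path plus direct effect c'

def ols_slopes(target, *predictors):
    """OLS slopes (intercept dropped) via least squares."""
    X = np.column_stack([np.ones(n)] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    return beta[1:]

a, = ols_slopes(m, x)               # X -> M
b, c_prime = ols_slopes(y, m, x)    # M -> Y controlling for X, and direct effect
indirect = a * b                    # mediated (indirect) effect of X on Y
```

Partial mediation corresponds to both `indirect` and `c_prime` being non-zero, as in the thesis's findings.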

DUE TO COPYRIGHT RESTRICTIONS ONLY AVAILABLE FOR CONSULTATION AT ASTON UNIVERSITY LIBRARY AND INFORMATION SERVICES WITH PRIOR ARRANGEMENT

One of the current research trends in Enterprise Resource Planning (ERP) involves examining the critical factors for its successful implementation. However, such research is limited to system implementation and does not focus on the flexibility of ERP in responding to changes in the business. This study therefore explores a combination system, made up of an ERP and informality, intended to provide organisations with efficient and flexible performance simultaneously. In addition, this research analyses the benefits and challenges of using such a system. The research was based on socio-technical systems (STS) theory, which contains two dimensions: 1) a technical dimension, which evaluates the performance of the system; and 2) a social dimension, which examines the impact of the system on an organisation. A mixed-methods approach was followed in this research. The qualitative part aims to understand the constraints of using a single ERP system and to define a new system that responds to these problems. To achieve this goal, four Chinese companies operating in different industries were studied, all of which faced challenges in using an ERP system owing to complexity and uncertainty in their business environments. The quantitative part contains a discrete-event simulation study intended to examine the impact on operational performance when a company implements the hybrid system in a real-life situation. Moreover, this research conducts a further qualitative case study, the better to understand the influence of the system in an organisation. The empirical aspect of the study reveals that an ERP with pre-determined business activities cannot react promptly to unanticipated changes in a business. Incorporating informality into an ERP allows it to react to different situations by using different procedures based on the practical knowledge of frontline employees. Furthermore, the simulation study shows that the combination system can achieve a balance between efficiency and flexibility. Unlike existing research, which emphasises continuous improvement in the IT functions of an enterprise system, this research contributes a theoretical definition of a new system with mixed performance, containing both the formal practices embedded in an ERP and informal activities based on human knowledge. It supports both cost-efficiency in executing business transactions and flexibility in coping with business uncertainty. This research also indicates risks of using the system, such as using an ERP with limited functions, a high cost of performing informally, and low system acceptance owing to a shift in organisational culture. With respect to practical contribution, this research suggests that companies can choose the most suitable enterprise-system approach in accordance with their operational strategies. The combination system can be implemented in a company that needs to handle a medium level of volume and variety. By contrast, the traditional ERP system is better suited to a company operating in a high-volume market, while an informal system is more suitable for a firm requiring a high level of variety.
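The quantitative part relies on discrete-event simulation; the thesis does not specify its model or software, so the following is only a generic single-server queueing sketch (a Lindley-style recursion rather than a full event calendar) with invented arrival and service rates.

```python
import random

def simulate(n_jobs=1000, arrival_rate=1.0, service_rate=1.2, seed=42):
    """Minimal discrete-event queueing sketch: exponential inter-arrival
    and service times, one server, FIFO discipline. Returns the mean
    waiting time in queue. A toy stand-in for the thesis's (unspecified)
    operational-performance model, not its actual simulation."""
    rng = random.Random(seed)
    # Generate arrival instants
    t = 0.0
    arrivals = []
    for _ in range(n_jobs):
        t += rng.expovariate(arrival_rate)
        arrivals.append(t)
    # Lindley recursion: each job starts when both it and the server are ready
    server_free = 0.0
    total_wait = 0.0
    for a in arrivals:
        start = max(a, server_free)        # wait if the server is still busy
        total_wait += start - a
        server_free = start + rng.expovariate(service_rate)
    return total_wait / n_jobs
```

A hybrid "efficient vs. flexible" comparison could then be run by varying the service-time distribution between a rigid (low-variance) and an informal (high-variance but adaptable) process.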

An iterative method for computing the channel capacity of both discrete-input and continuous-input, continuous-output channels is proposed. The efficiency of the new method is demonstrated in comparison with the classical Blahut-Arimoto algorithm for several known channels. Moreover, we also present a hybrid method combining the advantages of both the Blahut-Arimoto algorithm and our iterative approach. The new method is especially efficient for channels with an a priori unknown discrete input alphabet.
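For reference, the classical baseline named above can be sketched directly. This is the standard Blahut-Arimoto iteration for a discrete memoryless channel (not the paper's new method); for a binary symmetric channel with crossover probability 0.1 it should recover C = 1 - H(0.1), roughly 0.531 bits per use.

```python
import numpy as np

def blahut_arimoto(P, tol=1e-12, max_iter=5000):
    """Capacity (bits/use) of a discrete memoryless channel.
    P[x, y] = p(y|x). Classical Blahut-Arimoto iteration: alternate
    between the induced output distribution q and a multiplicative
    update of the input distribution r."""
    m = P.shape[0]
    r = np.full(m, 1.0 / m)                 # start from the uniform input
    for _ in range(max_iter):
        q = r @ P                           # output distribution p(y)
        # Per-input relative entropy D(P(.|x) || q), in bits
        with np.errstate(divide="ignore", invalid="ignore"):
            logratio = np.where(P > 0, np.log2(P / q), 0.0)
        d = (P * logratio).sum(axis=1)
        r_new = r * 2.0 ** d                # multiplicative update
        r_new /= r_new.sum()
        if np.abs(r_new - r).max() < tol:   # converged
            r = r_new
            break
        r = r_new
    return float(r @ d), r                  # capacity and optimal input
```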

Background: DNA-binding proteins play a pivotal role in various intra- and extra-cellular activities, ranging from DNA replication to gene expression control. Identification of DNA-binding proteins is one of the major challenges in the field of genome annotation. Several computational methods have been proposed in the literature for DNA-binding protein identification; however, most of them do not provide a valuable knowledge base for understanding DNA-protein interactions. Results: We first present a new protein sequence encoding method called PSSM Distance Transformation, and then construct a DNA-binding protein identification method (SVM-PSSM-DT) by combining PSSM Distance Transformation with a support vector machine (SVM). First, the PSSM profiles are generated by using the PSI-BLAST program to search the non-redundant (NR) database. Next, the PSSM profiles are transformed into uniform numeric representations by the distance transformation scheme. Lastly, the resulting uniform numeric representations are input to an SVM classifier for prediction, determining whether or not a sequence binds DNA. In a benchmark test on 525 DNA-binding and 550 non-DNA-binding proteins using jackknife validation, the present model achieved an ACC of 79.96%, an MCC of 0.622 and an AUC of 86.50%. This performance is considerably better than that of most existing state-of-the-art predictive methods. When tested on a recently constructed independent dataset, PDB186, SVM-PSSM-DT also achieved the best performance, with an ACC of 80.00%, an MCC of 0.647 and an AUC of 87.40%, outperforming existing state-of-the-art methods. Conclusions: The experimental results demonstrate that PSSM Distance Transformation is an effective protein sequence encoding method and that SVM-PSSM-DT is a useful tool for identifying DNA-binding proteins. A user-friendly SVM-PSSM-DT web server was constructed and is freely accessible at http://bioinformatics.hitsz.edu.cn/PSSM-DT/.
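The distance-transformation idea, mapping a variable-length L x 20 PSSM to a fixed-length vector by correlating profile columns at several sequence separations, can be sketched as follows. This is a simplified illustration of the encoding concept, not the paper's exact PSSM-DT formula, and the function name is invented.

```python
import numpy as np

def pssm_distance_transform(pssm, max_dist=3):
    """Map a variable-length PSSM (L x 20) to a fixed-length vector.

    For each separation d = 0..max_dist, accumulate the 20 x 20 matrix of
    products between profile rows at positions i and i + d, normalised by
    the number of contributing positions. The result has the same length
    for every input sequence, so it can feed a standard SVM.
    """
    L, A = pssm.shape
    feats = []
    for d in range(max_dist + 1):
        # (A, A) co-occurrence of profile scores at separation d
        M = pssm[: L - d].T @ pssm[d:] / (L - d)
        feats.append(M.ravel())
    return np.concatenate(feats)          # length (max_dist + 1) * A * A
```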

In this study, we developed and improved the numerical mode matching (NMM) method, which has previously been shown to be a fast and robust semi-analytical solver for investigating the propagation of electromagnetic (EM) waves in an isotropic layered medium. The applicable models, such as cylindrical waveguides, optical fibers, and boreholes in earth geological formations, are generally modeled as an axisymmetric structure: an orthogonal-plano-cylindrically layered (OPCL) medium consisting of materials stratified planarly and layered concentrically in the orthogonal directions.

In this report, several important improvements have been made to extend the applications of this efficient solver to the anisotropic OPCL medium. The formulas for anisotropic media with three different diagonal elements in the cylindrical coordinate system are deduced to expand its application to more general materials. The perfectly matched layer (PML) is incorporated along the radial direction as an absorbing boundary condition (ABC) to make the NMM method more accurate and efficient for wave diffusion problems in unbounded media, and applicable to scattering problems with lossless media. We manipulate the weak form of Maxwell's equations and impose the correct boundary conditions at the cylindrical axis to solve the singularity problem, which has been ignored by previous researchers. The spectral element method (SEM) is introduced to compute eigenmodes of higher accuracy more efficiently, with fewer unknowns, achieving a faster mode matching procedure between different horizontal layers. We also prove the relationship between the fields of opposite mode indices for different types of excitation, which can reduce the computational time by half. The formulas for computing EM fields excited by an electric or magnetic dipole located at any position with an arbitrary orientation are deduced, and the excitation is generalized to line and surface current sources, which extends the application of the NMM to simulations of controlled-source electromagnetic techniques. Numerical simulations have demonstrated the efficiency and accuracy of this method.
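As a hedged illustration of the eigenmode computation that mode matching requires, consider the transverse modes of a 1D layered index profile between perfectly conducting walls. This sketch uses plain second-order finite differences rather than the thesis's spectral element discretisation, and all parameters are invented.

```python
import numpy as np

def transverse_modes(n_profile, k0, h, n_modes=4):
    """Eigenmodes of  psi'' + k0^2 n(x)^2 psi = beta^2 psi  on a Dirichlet
    interval, discretised with second-order finite differences.

    n_profile: refractive index at the N interior grid points (a layered
    medium is just a piecewise-constant profile). Returns the n_modes
    largest propagation constants beta^2 and the mode shapes. A plain FD
    sketch; the thesis uses a spectral element discretisation instead.
    """
    N = len(n_profile)
    main = -2.0 / h**2 + k0**2 * np.asarray(n_profile) ** 2
    off = np.ones(N - 1) / h**2
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    vals, vecs = np.linalg.eigh(A)              # symmetric eigenproblem
    return vals[::-1][:n_modes], vecs[:, ::-1][:, :n_modes]
```

For a uniform profile with k0 = 0 the analytic eigenvalues are -(m*pi/L)^2, which gives a direct accuracy check.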

Finally, the improved NMM method is used to efficiently compute the electromagnetic response of an induction tool to orthogonal transverse hydraulic fractures in open or cased boreholes in hydrocarbon exploration. The hydraulic fracture is modeled as a slim circular disk that is symmetric with respect to the borehole axis and filled with electrically conductive or magnetic proppant. The NMM solver is first validated by comparing the normalized secondary field with experimental measurements and commercial software. We then quantitatively analyze the sensitivity of the induction response to fracture parameters, such as length and the conductivity and permeability of the filled proppant, to evaluate the effectiveness of the induction logging tool for fracture detection and mapping. Casings with different thicknesses, conductivities and permeabilities are modeled together with the fractures in boreholes to investigate their effects on fracture detection. The results reveal that the normalized secondary field is not weakened at low frequencies, even though the attenuation of the electromagnetic field through the casing is significant, so the induction tool remains applicable for fracture detection. A hybrid approach combining the NMM method with an integral-equation solver based on BCGS-FFT has been proposed to efficiently simulate open or cased boreholes with tilted fractures, which constitute a non-axisymmetric model.

Surveys can collect important data that inform policy decisions and drive social science research. Large government surveys collect information from the U.S. population on a wide range of topics, including demographics, education, employment, and lifestyle. Analysis of survey data presents unique challenges: in particular, one needs to account for missing data, for complex sampling designs, and for measurement error. Conceptually, a survey organization could spend substantial resources obtaining high-quality responses from a simple random sample, resulting in survey data that are easy to analyze. However, this scenario often is not realistic. To address these practical issues, survey organizations can leverage the information available from other sources of data. For example, in longitudinal studies that suffer from attrition, they can use the information from refreshment samples to correct for potential attrition bias. They can use information from known marginal distributions or from the survey design to improve inferences. They can use information from gold standard sources to correct for measurement error.

This thesis presents novel approaches to combining information from multiple sources that address the three problems described above.

The first method addresses nonignorable unit nonresponse and attrition in a panel survey with a refreshment sample. Panel surveys typically suffer from attrition, which can lead to biased inference when basing analysis only on cases that complete all waves of the panel. Unfortunately, the panel data alone cannot inform the extent of the bias due to attrition, so analysts must make strong and untestable assumptions about the missing data mechanism. Many panel studies also include refreshment samples, which are data collected from a random sample of new individuals during some later wave of the panel. Refreshment samples offer information that can be utilized to correct for biases induced by nonignorable attrition while reducing reliance on strong assumptions about the attrition process. To date, these bias correction methods have not dealt with two key practical issues in panel studies: unit nonresponse in the initial wave of the panel and in the refreshment sample itself. As we illustrate, nonignorable unit nonresponse can significantly compromise the analyst's ability to use the refreshment samples for attrition bias correction. Thus, it is crucial for analysts to assess how sensitive their inferences, corrected for panel attrition, are to different assumptions about the nature of the unit nonresponse. We present an approach that facilitates such sensitivity analyses, both for suspected nonignorable unit nonresponse in the initial wave and in the refreshment sample. We illustrate the approach using simulation studies and an analysis of data from the 2007-2008 Associated Press/Yahoo News election panel study.

The second method incorporates informative prior beliefs about marginal probabilities into Bayesian latent class models for categorical data. The basic idea is to append synthetic observations to the original data such that (i) the empirical distributions of the desired margins match those of the prior beliefs, and (ii) the values of the remaining variables are left missing. The degree of prior uncertainty is controlled by the number of augmented records. Posterior inferences can be obtained via typical MCMC algorithms for latent class models, tailored to deal efficiently with the missing values in the concatenated data.

We illustrate the approach using a variety of simulations based on data from the American Community Survey, including an example of how augmented records can be used to fit latent class models to data from stratified samples.
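The augmented-records construction for the second method can be sketched directly: append synthetic rows whose target variable reproduces the prior margin and whose remaining variables are left missing, with the row count acting as the prior sample size. A minimal version, in which the function name and data layout are assumptions:

```python
import numpy as np

def augment_with_margin(data, var_index, margin, n_aug):
    """Append n_aug synthetic records to `data` (a 2D float array using
    np.nan for missing). Column var_index of the new rows reproduces the
    prior margin (a dict {category: probability}) as exactly as rounding
    allows; every other column is left missing, so the MCMC for the latent
    class model treats the synthetic rows as partially observed cases.
    More augmented rows means a stronger (less uncertain) prior."""
    cats = sorted(margin)
    counts = np.round([margin[c] * n_aug for c in cats]).astype(int)
    counts[-1] = n_aug - counts[:-1].sum()        # force the exact total
    aug = np.full((n_aug, data.shape[1]), np.nan)
    aug[:, var_index] = np.repeat(cats, counts).astype(float)
    return np.vstack([data, aug])
```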

The third method leverages the information from a gold standard survey to model reporting error. Survey data are subject to reporting error when respondents misunderstand the question or accidentally select the wrong response. Sometimes survey respondents knowingly select the wrong response, for example, by reporting a higher level of education than they actually have attained. We present an approach that allows an analyst to model reporting error by incorporating information from a gold standard survey. The analyst can specify various reporting error models and assess how sensitive their conclusions are to different assumptions about the reporting error process. We illustrate the approach using simulations based on data from the 1993 National Survey of College Graduates. We use the method to impute error-corrected educational attainments in the 2010 American Community Survey using the 2010 National Survey of College Graduates as the gold standard survey.

Immunohistochemistry (IHC) is the group of techniques that use antibodies as specific reagents to identify and demonstrate cell and tissue components that are antigens. This binding makes it possible to locate and identify the in situ presence of various substances by means of a colour associated with the antigen-antibody complexes formed. The practical value of this area of biotechnology, widely used in pathology and oncology in diagnostic, prognostic, theranostic and research contexts, results from the possibility of combining a colour marker with an antibody without damaging the specific binding established between antibody and antigen. This allows microscopic observation of the target locations where the antibody, and hence the antigen, is present. IHC is thus a powerful means of identifying cellular and tissue structures that can be associated with pathologies, and of assessing the functional and morphological consequences of the action of these same elements.

Due to trends in aero-design, aeroelasticity is becoming increasingly important in modern turbomachines. Design requirements of turbomachines lead to the development of high-aspect-ratio blades and blade-integral disc designs (blisks), which are especially prone to complex modes of vibration. Therefore, experimental investigations yielding high-quality data are required to improve the understanding of aeroelastic effects in turbomachines. One way to obtain such data is to excite and measure blade vibrations in turbomachines. The major requirement for blade excitation and blade vibration measurement is to minimize interference with the aeroelastic effects under investigation. Thus, in this paper, a non-contact, and thus low-interference, experimental set-up for exciting and measuring blade vibrations is proposed and shown to work. A novel acoustic system excites rotor blade vibrations, which are measured with an optical tip-timing system. By performing measurements in an axial compressor, the potential of the acoustic excitation method for investigating aeroelastic effects is explored. The basic principle of this method is described and proven through the analysis of blade responses at different acoustic excitation frequencies and different rotational speeds. To verify the accuracy of the tip-timing system, amplitudes measured by tip-timing are compared with strain gauge measurements and found to agree well. Two approaches to varying the nodal diameter (ND) of the excited vibration mode by controlling the acoustic excitation are presented. By combining the different excitable acoustic modes with phase-lag control, each ND of the investigated 30-blade rotor can be excited individually. This feature of the present acoustic excitation system is of great benefit to aeroelastic investigations and represents one of the main advantages over other excitation methods proposed in the past.
In future studies, the acoustic excitation method will be used to investigate aeroelastic effects in high-speed turbomachines in detail. The results of these investigations are to be used to improve the aeroelastic design of modern turbomachines.
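Phase-lag control of the kind described above follows a simple rule: with S equally spaced sources at angles theta_s = 2*pi*s/S, driving source s with phase ND*theta_s concentrates the circumferential content of the source array in the ND-th spatial Fourier bin (modulo S, the spatial aliasing limit). The numbers below are invented and do not describe the paper's hardware configuration.

```python
import numpy as np

def source_phases(n_sources, nd):
    """Phase (radians) for each of n_sources equally spaced acoustic
    sources so that their superposition approximates a traveling wave
    with nd nodal diameters around the annulus."""
    theta = 2 * np.pi * np.arange(n_sources) / n_sources
    return nd * theta

# The complex source weights e^{i phi_s} then place all of their
# circumferential Fourier content in bin nd (modulo n_sources).
```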

We propose a pre-processing mesh redistribution algorithm based upon harmonic maps, employed in conjunction with discontinuous Galerkin approximations of advection-diffusion-reaction problems. Extensive two-dimensional numerical experiments with different choices of monitor function, including monitor functions derived from goal-oriented a posteriori error indicators, are presented. The examples clearly demonstrate the capabilities and benefits of combining our pre-processing mesh movement algorithm with uniform, as well as adaptive isotropic and anisotropic, mesh refinement.
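In one dimension, monitor-driven mesh redistribution reduces to de Boor equidistribution: place the nodes so that every cell carries an equal share of the integral of the monitor function. The paper's harmonic-map algorithm is a two-dimensional method; the 1D sketch below, with an invented monitor, only illustrates the equidistribution principle.

```python
import numpy as np

def equidistribute(x, monitor, n_nodes):
    """Redistribute n_nodes mesh points on [x[0], x[-1]] so each cell holds
    an equal share of the integral of the monitor function.

    x, monitor: a fine reference grid and the monitor values on it.
    Works by inverting the cumulative monitor integral by interpolation.
    """
    # Trapezoidal cumulative integral of the monitor along the fine grid
    M = np.concatenate([[0.0],
                        np.cumsum(0.5 * (monitor[1:] + monitor[:-1]) * np.diff(x))])
    targets = np.linspace(0.0, M[-1], n_nodes)   # equal shares of total integral
    return np.interp(targets, M, x)              # invert M(x) for node positions
```

With a monitor peaked at a feature, the redistributed nodes cluster there while the endpoints stay fixed, which is exactly the behaviour a DG solver wants before refinement.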

In the presented thesis work, the meshfree method with distance fields was coupled with the lattice Boltzmann method to obtain solutions of fluid-structure interaction problems. The thesis work involved development and implementation of numerical algorithms, data structure, and software. Numerical and computational properties of the coupling algorithm combining the meshfree method with distance fields and the lattice Boltzmann method were investigated. Convergence and accuracy of the methodology was validated by analytical solutions. The research was focused on fluid-structure interaction solutions in complex, mesh-resistant domains as both the lattice Boltzmann method and the meshfree method with distance fields are particularly adept in these situations. Furthermore, the fluid solution provided by the lattice Boltzmann method is massively scalable, allowing extensive use of cutting edge parallel computing resources to accelerate this phase of the solution process. The meshfree method with distance fields allows for exact satisfaction of boundary conditions making it possible to exactly capture the effects of the fluid field on the solid structure.
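A minimal collide-and-stream step of the lattice Boltzmann method (D2Q9 lattice, BGK collision, fully periodic domain) can be sketched as follows. It omits everything specific to the thesis, in particular the meshfree structure coupling and the distance-field boundary treatment, and simply shows the fluid update, whose two sub-steps each conserve mass exactly.

```python
import numpy as np

# D2Q9 lattice: discrete velocities and quadrature weights
C = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
W = np.array([4/9, 1/9, 1/9, 1/9, 1/9, 1/36, 1/36, 1/36, 1/36])

def equilibrium(rho, u):
    """Second-order Maxwell-Boltzmann equilibrium on the D2Q9 lattice."""
    cu = np.einsum("qd,xyd->xyq", C, u)
    usq = (u ** 2).sum(axis=-1)[..., None]
    return rho[..., None] * W * (1 + 3 * cu + 4.5 * cu**2 - 1.5 * usq)

def lbm_step(f, tau=0.8):
    """One BGK collide-and-stream step on a fully periodic domain.
    f: (nx, ny, 9) distribution functions. Solid boundaries, where the
    meshfree structure solution would enter the coupled FSI scheme, are
    deliberately omitted from this sketch."""
    rho = f.sum(axis=-1)                               # density moment
    u = np.einsum("xyq,qd->xyd", f, C) / rho[..., None]  # velocity moment
    f = f + (equilibrium(rho, u) - f) / tau            # BGK collision
    for q, (cx, cy) in enumerate(C):                   # streaming (periodic)
        f[..., q] = np.roll(np.roll(f[..., q], cx, axis=0), cy, axis=1)
    return f
```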