920 results for Partial data fusion
Abstract:
PART I: Cross-section uncertainties under different neutron spectra. PART II: Processing uncertainty libraries
Abstract:
When applying multivariate analysis techniques in information systems and social science disciplines, such as management information systems (MIS) and marketing, the assumption that the empirical data originate from a single homogeneous population is often unrealistic. When applying a causal modeling approach, such as partial least squares (PLS) path modeling, segmentation is a key issue in coping with the problem of heterogeneity in estimated cause-and-effect relationships. This chapter presents a new PLS path modeling approach which classifies units on the basis of the heterogeneity of the estimates in the inner model. If unobserved heterogeneity significantly affects the estimated path model relationships at the aggregate data level, the methodology allows homogeneous groups of observations to be created that exhibit distinctive path model estimates. The approach thus provides differentiated analytical outcomes that permit more precise interpretations of each segment formed. An application to a large data set from the American Customer Satisfaction Index (ACSI) substantiates the methodology's effectiveness in evaluating PLS path modeling results.
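A minimal numerical sketch of the problem this abstract addresses (a toy illustration, not the chapter's actual segmentation algorithm): when two unobserved segments carry opposite inner-model path coefficients, a pooled estimate masks both effects, while segment-wise estimation recovers them. The data, coefficients, and segment labels below are invented for illustration.

```python
# Toy illustration of unobserved heterogeneity in path coefficients.
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=2 * n)
seg = np.repeat([0, 1], n)            # unobserved segment labels
beta = np.where(seg == 0, 1.0, -1.0)  # segment-specific path coefficients
y = beta * x + 0.1 * rng.normal(size=2 * n)

# Pooled estimate: close to zero, hiding both segment effects.
pooled = np.polyfit(x, y, 1)[0]

# Segment-wise estimates recover the heterogeneous paths.
b0 = np.polyfit(x[seg == 0], y[seg == 0], 1)[0]
b1 = np.polyfit(x[seg == 1], y[seg == 1], 1)[0]
print(round(pooled, 2), round(b0, 2), round(b1, 2))
```

In practice the segment labels are not observed, which is exactly why a segmentation method such as the one described above is needed.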
Abstract:
When we study the variables that affect survival time, we usually estimate their effects by the Cox regression model. In biomedical research, effects of the covariates are often modified by a biomarker variable. This leads to covariate-biomarker interactions. Here the biomarker is an objective measurement of the patient characteristics at baseline. Liu et al. (2015) built a local partial likelihood bootstrap model to estimate and test this interaction effect of covariates and biomarker, but the R code developed by Liu et al. (2015) can only handle one variable and one interaction term and cannot fit the model with adjustment for nuisance variables. In this project, we expand the model to allow adjustment for nuisance variables, expand the R code to take any chosen interaction terms, and set up many parameters for users to customize their research. We also build an R package called "lplb" to integrate the complex computations into a simple interface. We conduct numerical simulations to show that the new method has excellent finite sample properties under both the null and alternative hypotheses. We also apply the method to analyze data from a prostate cancer clinical trial with the acid phosphatase (AP) biomarker.
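For readers unfamiliar with the model family involved, here is a self-contained sketch (in Python rather than the project's R, and not the lplb package itself) of a Cox log partial likelihood whose design matrix includes a treatment-by-biomarker interaction column, which is how effect modification enters the model. All data and variable names are invented for illustration.

```python
# Sketch: Cox log partial likelihood with a covariate-biomarker interaction.
import numpy as np

def cox_log_partial_likelihood(beta, X, time, event):
    """Log partial likelihood; the risk set at each event time is
    every subject whose time is at least as large."""
    order = np.argsort(time)               # sort so risk sets are suffixes
    X, time, event = X[order], time[order], event[order]
    eta = X @ beta
    ll = 0.0
    for i in range(len(time)):
        if event[i]:
            ll += eta[i] - np.log(np.exp(eta[i:]).sum())
    return ll

rng = np.random.default_rng(1)
n = 200
trt = rng.integers(0, 2, n).astype(float)   # treatment indicator
bio = rng.normal(size=n)                    # baseline biomarker
X = np.column_stack([trt, bio, trt * bio])  # interaction column = effect modification
time = rng.exponential(1.0, n)
event = np.ones(n, dtype=bool)

ll0 = cox_log_partial_likelihood(np.zeros(3), X, time, event)
print(ll0)
```

Maximizing this function over beta (and bootstrapping locally in the biomarker, as the abstract describes) is what the actual method automates.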
Abstract:
With the advent of Service Oriented Architecture, Web Services have gained tremendous popularity. Due to the availability of a large number of Web services, finding an appropriate Web service according to the requirement of the user is a challenge. This warrants the need to establish an effective and reliable process of Web service discovery. A considerable body of research has emerged to develop methods to improve the accuracy of Web service discovery to match the best service. The process of Web service discovery results in suggesting many individual services that partially fulfil the user's interest. Considering the semantic relationships of words used in describing the services, as well as the input and output parameters, can lead to accurate Web service discovery. Appropriate linking of individual matched services should fully satisfy the requirements the user is looking for. This research proposes to integrate a semantic model and a data mining technique to enhance the accuracy of Web service discovery. A novel three-phase Web service discovery methodology has been proposed. The first phase performs match-making to find semantically similar Web services for a user query. In order to perform semantic analysis on the content present in the Web Service Description Language document, the support-based latent semantic kernel is constructed using an innovative concept of binning and merging on a large quantity of text documents covering diverse domains of knowledge. The use of a generic latent semantic kernel constructed with a large number of terms helps to find the hidden meaning of the query terms which otherwise could not be found. Sometimes a single Web service is unable to fully satisfy the requirement of the user. In such cases, a composition of multiple inter-related Web services is presented to the user. The task of checking the possibility of linking multiple Web services is done in the second phase.
Once the feasibility of linking Web services is checked, the objective is to provide the user with the best composition of Web services. In the link analysis phase, the Web services are modelled as nodes of a graph and an all-pairs shortest-path algorithm is applied to find the optimum path at the minimum cost for traversal. The third phase, system integration, integrates the results from the preceding two phases by using an original fusion algorithm in the fusion engine. Finally, the recommendation engine, which is an integral part of the system integration phase, makes the final recommendations, including individual and composite Web services, to the user. In order to evaluate the performance of the proposed method, extensive experimentation has been performed. Results of the proposed support-based semantic kernel method of Web service discovery are compared with the results of the standard keyword-based information-retrieval method and a clustering-based machine-learning method of Web service discovery. The proposed method outperforms both the information-retrieval and machine-learning-based methods. Experimental results and statistical analysis also show that the best Web service compositions are obtained by considering 10 to 15 Web services that are found in phase-I for linking. Empirical results also ascertain that the fusion engine boosts the accuracy of Web service discovery by combining the inputs from both the semantic analysis (phase-I) and the link analysis (phase-II) in a systematic fashion. Overall, the accuracy of Web service discovery with the proposed method shows a significant improvement over traditional discovery methods.
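The link-analysis phase described above can be sketched with a standard all-pairs shortest-path algorithm (Floyd-Warshall shown here; the thesis does not name which algorithm it uses, and the graph below is an invented toy example): services are nodes and edge weights are composition costs.

```python
# All-pairs cheapest composition paths over a toy service graph.
INF = float("inf")

def floyd_warshall(cost):
    """cost[i][j]: direct linking cost from service i to service j."""
    n = len(cost)
    d = [row[:] for row in cost]
    for k in range(n):          # allow intermediate service k
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

# Service 0 links directly to 1 (cost 4) and to 2 (cost 1); 2 links to 1 (cost 2).
cost = [[0, 4, 1],
        [INF, 0, INF],
        [INF, 2, 0]]
print(floyd_warshall(cost)[0][1])  # cheapest composition routes 0 -> 2 -> 1
```

The cheapest 0-to-1 composition goes through service 2 at total cost 3, beating the direct link of cost 4, which is exactly the kind of multi-service composition the recommendation engine would surface.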
Abstract:
A bioactive and bioresorbable scaffold fabricated from medical grade poly(epsilon-caprolactone) and incorporating 20% beta-tricalcium phosphate (mPCL–TCP) was recently developed for bone regeneration at load bearing sites. In the present study, we aimed to evaluate bone ingrowth into mPCL–TCP in a large animal model of lumbar interbody fusion. Six pigs underwent a 2-level (L3/4; L5/6) anterior lumbar interbody fusion (ALIF) implanted with mPCL–TCP + 0.6 mg rhBMP-2 as the treatment group, while four other pigs implanted with autogenous bone graft served as the control group. Computed tomographic scanning and histology revealed complete defect bridging in all (100%) specimens from the treatment group as early as 3 months. Histological evidence of continuing bone remodeling and maturation was observed at 6 months. In the control group, only partial bridging was observed at 3 months and only 50% of segments in this group showed complete defect bridging at 6 months. Furthermore, 25% of segments in the control group showed evidence of graft fracture, resorption and pseudoarthrosis. In contrast, no evidence of graft fracture, pseudoarthrosis or foreign body reaction was observed in the treatment group. These results reveal that mPCL–TCP scaffolds could act as bone graft substitutes by providing a suitable environment for bone regeneration in a dynamic load bearing setting such as a porcine model of interbody spine fusion.
Abstract:
By using the Rasch model, much detailed diagnostic information is available to developers of survey and assessment instruments and to the researchers who use them. We outline an approach to the analysis of data obtained from the administration of survey instruments that can enable researchers to recognise and diagnose difficulties with those instruments and then to suggest remedial actions that can improve the measurement properties of the scales included in questionnaires. We illustrate the approach using examples drawn from recent research and demonstrate how the approach can be used to generate figures that make the results of Rasch analyses accessible to non-specialists.
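The Rasch model underlying the diagnostic approach described above has a simple closed form: the probability that a person of ability theta succeeds on (or endorses) an item of difficulty b is P = exp(theta − b) / (1 + exp(theta − b)). A minimal sketch, with invented ability and difficulty values:

```python
# Rasch model: item response probability as a logistic function
# of the difference between person ability and item difficulty.
import math

def rasch_p(theta, b):
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# Ability equal to item difficulty gives exactly a 50% probability.
print(rasch_p(0.5, 0.5))  # 0.5
print(rasch_p(2.0, 0.0))  # high-ability person, easy item: close to 1
```

Misfit between observed responses and these model probabilities is what lets developers diagnose problem items in a survey instrument.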
Abstract:
Surveillance systems such as object tracking and abandoned object detection systems typically rely on a single modality of colour video for their input. These systems work well in controlled conditions but often fail when low lighting, shadowing, smoke, dust or unstable backgrounds are present, or when the objects of interest are a similar colour to the background. Thermal images are not affected by lighting changes or shadowing, and are not overtly affected by smoke, dust or unstable backgrounds. However, thermal images lack colour information, which makes distinguishing between different people or objects of interest within the same scene difficult.

By using modalities from both the visible and thermal infrared spectra, we are able to obtain more information from a scene and overcome the problems associated with using either modality individually. We evaluate four approaches for fusing visual and thermal images for use in a person tracking system (two early fusion methods, one mid fusion and one late fusion method), in order to determine the most appropriate method for fusing multiple modalities. We also evaluate two of these approaches for use in abandoned object detection, and propose an abandoned object detection routine that utilises multiple modalities. To aid in the tracking and fusion of the modalities we propose a modified condensation filter that can dynamically change the particle count and features used according to the needs of the system.

We compare tracking and abandoned object detection performance for the proposed fusion schemes and the visual and thermal domains on their own. Testing is conducted using the OTCBVS database to evaluate object tracking, and data captured in-house to evaluate the abandoned object detection. Our results show that significant improvement can be achieved, and that a middle fusion scheme is most effective.
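To make the fusion terminology concrete, here is an illustrative per-pixel mid-fusion sketch (not the thesis code; the weighting scheme, confidence maps and threshold are invented for illustration): foreground evidence from the visual and thermal channels is combined before the detection decision, rather than fusing raw images (early) or fusing per-channel detections (late).

```python
# Illustrative mid-fusion of visual and thermal foreground confidence maps.
import numpy as np

def mid_fuse(visual_conf, thermal_conf, w_visual=0.5):
    """Weighted per-pixel combination of two confidence maps in [0, 1]."""
    return w_visual * visual_conf + (1.0 - w_visual) * thermal_conf

visual = np.array([[0.9, 0.1],
                   [0.2, 0.8]])   # e.g. colour background-subtraction output
thermal = np.array([[0.7, 0.1],
                    [0.9, 0.2]])  # e.g. hot-object evidence
fused = mid_fuse(visual, thermal, w_visual=0.4)
detections = fused > 0.5          # decide on the fused evidence
print(detections)
```

Note how the bottom-left pixel is detected only after fusion: the weak visual evidence (0.2) is rescued by strong thermal evidence (0.9), which is the failure mode (shadowing, low light) that motivates multi-modal fusion.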
Abstract:
Information fusion in biometrics has received considerable attention. The architecture proposed here is based on the sequential integration of multi-instance and multi-sample fusion schemes. This method is analytically shown to improve performance and to allow a controlled trade-off between false alarms and false rejects when the classifier decisions are statistically independent. Equations developed for the detection error rates are experimentally evaluated by applying the proposed architecture to text-dependent speaker verification using HMM-based, digit-dependent speaker models. The tuning of the parameters, n classifiers and m attempts/samples, is investigated and the resultant detection error trade-off performance is evaluated on individual digits. Results show that performance improvement can be achieved even for weaker classifiers (FRR = 19.6%, FAR = 16.7%). The architectures investigated apply to speaker verification from spoken digit strings, such as credit card numbers, in telephone-, VoIP- or internet-based applications.
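The independence argument behind such closed-form error-rate equations can be sketched with two standard decision rules (illustrative rules only; the abstract does not state that these are the exact combination rules of the proposed architecture): requiring all of n independent classifiers to accept drives down false accepts, while allowing any of m independent attempts to succeed drives down false rejects.

```python
# Closed-form error rates for independent biometric decisions.
def and_rule(far, frr, n):
    """Accept only if all n independent classifiers accept."""
    return far ** n, 1.0 - (1.0 - frr) ** n

def or_rule(far, frr, m):
    """Accept if any of m independent attempts is accepted."""
    return 1.0 - (1.0 - far) ** m, frr ** m

# Weak classifier from the abstract: FRR = 19.6%, FAR = 16.7%.
far2_and, frr2_and = and_rule(0.167, 0.196, 2)
far2_or, frr2_or = or_rule(0.167, 0.196, 2)
print(round(far2_and, 4), round(frr2_or, 4))
```

Each rule improves one error rate at the cost of the other; sequencing the two kinds of fusion, as the architecture above does, is what yields the controlled trade-off.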
Abstract:
The technological environment in which contemporary small and medium-sized enterprises (SMEs) operate can only be described as dynamic. The exponential rate of technological change, characterised by perceived increases in the benefits associated with various technologies, shortening product life cycles and changing standards, provides the SME with a complex and challenging operational context. The primary aim of this research was to identify the needs of SMEs in regional areas for mobile data technologies (MDT). In this study a distinction was drawn between those respondents who were full-adopters of technology, those who were partial-adopters and those who were non-adopters, and these three segments articulated different needs and requirements for MDT. Overall, the needs of regional SMEs for MDT can be conceptualised into three areas where the technology will assist business practices: communication, e-commerce and security.
Abstract:
The seemingly exponential nature of technological change provides SMEs with a complex and challenging operational context. The development of infrastructures capable of supporting the wireless application protocol (WAP) and associated 'wireless' applications represents the latest generation of technological innovation with potential appeal to SMEs and end-users alike. This paper aims to understand the mobile data technology needs of SMEs in a regional setting. The research was especially concerned with perceived needs across three market segments: non-adopters, partial-adopters and full-adopters of new technology. The research was exploratory in nature as the phenomenon under scrutiny is relatively new and the uses unclear; thus focus groups were conducted with each of the segments. The paper provides insights for business, industry and academics.
Abstract:
The technological environment in which contemporary small- and medium-sized enterprises (SMEs) operate can only be described as dynamic. The exponential rate of technological change, characterised by perceived increases in the benefits associated with various technologies, shortening product life cycles and changing standards, provides the SME with a complex and challenging operational context. The primary aim of this research was to identify the needs of SMEs in regional areas for mobile data technologies (MDT). In this study a distinction was drawn between those respondents who were full-adopters of technology, those who were partial-adopters, and those who were non-adopters, and these three segments articulated different needs and requirements for MDT. Overall, the needs of regional SMEs for MDT can be conceptualised into three areas where the technology will assist business practices: communication, e-commerce and security.
Abstract:
The technological environment in which contemporary small and medium-sized enterprises (SMEs) operate can only be described as dynamic. The seemingly exponential nature of technological change, characterised by perceived increases in the benefits associated with various technologies, shortening product life cycles and changing standards, provides the small and medium-sized enterprise with a complex and challenging operational context. The development of infrastructures capable of supporting the Wireless Application Protocol (WAP) and associated 'wireless' applications represents the latest generation of technological innovation with potential appeal to SMEs and end-users alike. The primary aim of this research was to understand the mobile data technology needs of SMEs in a regional setting. The research was especially concerned with perceived needs across three market segments: non-adopters of new technology, partial-adopters of new technology and full-adopters of new technology. Working with an industry partner, focus groups were conducted with each of these segments, with the discussions focused on the use of the latest WAP products and services. Some of the results are presented in this paper.