973 results for Identification parameters


Relevance:

20.00%

Publisher:

Abstract:

This paper presents a simple and intuitive approach to determining the kinematic parameters of a serial-link robot in Denavit–Hartenberg (DH) notation. Once a manipulator's kinematics is parameterized in this form, a large body of standard algorithms and code implementations for kinematics, dynamics, motion planning, and simulation becomes available. The proposed method has two parts. The first is the "walk through", a simple procedure that creates a string of elementary translations and rotations from the user-defined base coordinate frame to the end-effector. The second is an algebraic procedure that manipulates this string into a form that can be factorized as link transforms, which can be represented in standard or modified DH notation. The method allows for an arbitrary base and end-effector coordinate system as well as an arbitrary zero-joint-angle pose. The algebraic procedure is amenable to computer algebra manipulation, and a Java program is available as supplementary downloadable material.
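
As a minimal illustration of the target form, the numpy sketch below composes standard DH link transforms A_i = Rz(theta) Tz(d) Tx(a) Rx(alpha) into a forward-kinematic chain; the two-link parameter values are invented for the example and are unrelated to the paper's Java tool.

    import numpy as np

    def rotz(q):
        c, s = np.cos(q), np.sin(q)
        return np.array([[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]])

    def rotx(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[1, 0, 0, 0], [0, c, -s, 0], [0, s, c, 0], [0, 0, 0, 1]])

    def transz(d):
        T = np.eye(4); T[2, 3] = d; return T

    def transx(a):
        T = np.eye(4); T[0, 3] = a; return T

    def dh_link(theta, d, a, alpha):
        # Standard DH link transform: A_i = Rz(theta) Tz(d) Tx(a) Rx(alpha)
        return rotz(theta) @ transz(d) @ transx(a) @ rotx(alpha)

    # Forward kinematics as a product of link transforms (illustrative 2-link arm)
    links = [(np.pi / 4, 0.0, 0.3, 0.0), (np.pi / 6, 0.0, 0.2, 0.0)]
    T = np.eye(4)
    for theta, d, a, alpha in links:
        T = T @ dh_link(theta, d, a, alpha)
    print(T[:3, 3])   # end-effector position in the base frame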

Relevance:

20.00%

Publisher:

Abstract:

Today's evolving networks are experiencing a large number of different attacks, ranging from system break-ins, infection from automatic attack tools such as worms, viruses and Trojan horses, to denial of service (DoS). One important aspect of such attacks is that they are often indiscriminate and target Internet addresses without regard to whether those addresses are legitimately allocated. In the absence of any advertised host services, the traffic observed on unused IP addresses is by definition unsolicited and likely to be either opportunistic or malicious. The analysis of large repositories of such traffic can be used to extract useful information about both ongoing and new attack patterns and to unearth unusual attack behaviors. However, such analysis is difficult because of the size and nature of the traffic collected on unused address spaces. In this dissertation, we present a network traffic analysis technique which uses traffic collected from unused address spaces and relies on the statistical properties of the collected traffic in order to accurately and quickly detect new and ongoing network anomalies. Detection of network anomalies is based on the concept that an anomalous activity usually transforms the network parameters in such a way that their statistical properties no longer remain constant, resulting in abrupt changes. In this dissertation, we use sequential analysis techniques to identify changes in the behavior of network traffic targeting unused address spaces and so unveil both ongoing and new attack patterns. Specifically, we have developed dynamic sliding-window-based non-parametric cumulative sum (CUSUM) change detection techniques for the identification of changes in network traffic. Furthermore, we have introduced dynamic thresholds to detect changes in network traffic behavior and also to detect when a particular change has ended. Experimental results are presented that demonstrate the operational effectiveness and efficiency of the proposed approach, using both synthetically generated datasets and real network traces collected from a dedicated block of unused IP addresses.
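
The Python sketch below illustrates the general idea of a one-sided CUSUM detector with a sliding baseline window; the window length, drift, and threshold values are illustrative defaults, not the dissertation's dynamic thresholds or calibrated settings.

    import numpy as np

    def cusum_detect(x, window=50, drift=0.5, threshold=8.0):
        """Flag indices where the upper CUSUM statistic exceeds a threshold.

        The baseline mean/std are re-estimated over a sliding window of recent
        samples, so no parametric traffic model is assumed.
        """
        s = 0.0
        alarms = []
        baseline = list(x[:window])
        for i, xi in enumerate(x[window:], start=window):
            mu, sigma = np.mean(baseline), np.std(baseline) + 1e-9
            z = (xi - mu) / sigma                  # standardise against recent history
            s = max(0.0, s + z - drift)            # one-sided CUSUM recursion
            if s > threshold:
                alarms.append(i)                   # change detected
                s = 0.0                            # reset once an alarm is raised
            else:
                baseline.pop(0)
                baseline.append(xi)                # slide the baseline window forward
        return alarms

    # Synthetic example: flow counts with a shift in the mean half-way through
    rng = np.random.default_rng(0)
    traffic = np.concatenate([rng.poisson(20, 500), rng.poisson(45, 500)])
    print(cusum_detect(traffic)[:3])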

Relevance:

20.00%

Publisher:

Abstract:

Service bundling can be regarded as an option for service providers to strengthen their competitive advantages and cope with dynamic market conditions and heterogeneous consumer demand. Despite these positive effects, there is little concrete guidance on identifying service bundles or on the act of bundling itself. To fill this gap, previous research has conceptualized a service bundling method that relies on a structured service description. The method reasons about the suitability of services to be part of a bundle by analyzing the relationships between services captured in a description language. This paper extends that research by presenting an initial set of empirically derived relationships between services in existing bundles, which can subsequently be utilized to identify potential new bundles. Additionally, a gap analysis points out to what extent prominent ontologies and service description languages accommodate the identified relationships.

Relevance:

20.00%

Publisher:

Abstract:

In a resource-constrained business world, strategic choices must be made on process improvement and service delivery. There are calls for more agile forms of enterprise, and much effort is being directed at moving organizations from a complex landscape of disparate application systems to an integrated and flexible enterprise that accesses complex systems landscapes through a service-oriented architecture (SOA). This paper describes the deconstruction of an enterprise into business services using value chain analysis, since each element in the value chain can be rendered as a business service in the SOA. These business services are explicitly linked to specific organizational strategies, and their contribution to the attainment of each strategy is assessed and recorded. This contribution is then used to rank business services by their importance to strategy, which supports executive decision making on which business services to develop in the SOA. The paper describes an application of this Critical Service Identification Methodology (CSIM) to a case study.
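
A small hypothetical sketch of the kind of scoring that produces such a rank order: each business service receives contribution scores against weighted strategies, and the weighted sum orders the services for SOA development. The service names, strategies, weights, and scores below are invented and are not taken from the case study.

    # Hypothetical strategy weights reflecting executive priorities
    strategy_weights = {"grow_market_share": 0.5, "reduce_cost_to_serve": 0.3, "improve_compliance": 0.2}

    # Hypothetical contribution scores (0-5) of business services to each strategy
    contribution = {
        "customer_onboarding": {"grow_market_share": 5, "reduce_cost_to_serve": 2, "improve_compliance": 3},
        "invoice_processing":  {"grow_market_share": 1, "reduce_cost_to_serve": 5, "improve_compliance": 4},
        "order_fulfilment":    {"grow_market_share": 4, "reduce_cost_to_serve": 3, "improve_compliance": 2},
    }

    # Rank services by weighted contribution to strategy
    ranked = sorted(
        contribution,
        key=lambda svc: sum(strategy_weights[s] * v for s, v in contribution[svc].items()),
        reverse=True,
    )
    print(ranked)  # candidate order for SOA development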

Relevance:

20.00%

Publisher:

Abstract:

Longitudinal data, in which subjects are repeatedly observed or measured over time or age, provide the foundation for analysing processes which evolve over time; such analyses can be referred to as growth or trajectory models. One traditional way of specifying growth models is to employ linear or polynomial functional forms to model trajectory shape, and to account for variation around an overall mean trend by including random effects, or individual variation, on the functional shape parameters. The identification of distinct subgroups or sub-classes (latent classes) within these trajectory models, which are not based on some pre-existing individual classification, provides an important methodology with substantive implications. The identification of subgroups or classes has wide application in the medical arena, where responder/non-responder identification based on distinctly differing trajectories delivers further information for clinical processes. This thesis develops Bayesian statistical models and techniques for the identification of subgroups in the analysis of longitudinal data where the number of time intervals is limited. These models are then applied to a single case study investigating neuropsychological cognition in early stage breast cancer patients undergoing adjuvant chemotherapy treatment, from the Cognition in Breast Cancer Study undertaken by the Wesley Research Institute of Brisbane, Queensland. Alternative formulations to the linear or polynomial approach are taken, using piecewise linear models with a single turning point, change-point or knot at a known time point, and latent basis models for the non-linear trajectories found for the verbal memory domain of cognitive function before and after chemotherapy treatment. Hierarchical Bayesian random effects models are used as a starting point for the latent class modelling process and are extended with the incorporation of covariates in the trajectory profiles and as predictors of class membership. The Bayesian latent basis models enable the degree of recovery post-chemotherapy to be estimated for short- and long-term follow-up occasions, and the distinct class trajectories assist in the identification of breast cancer patients who may be at risk of long-term verbal memory impairment.
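
The numpy sketch below shows the piecewise linear (single known knot) trajectory form with per-subject random effects; the knot location, coefficients, measurement occasions and noise level are illustrative values, not estimates from the Cognition in Breast Cancer Study.

    import numpy as np

    def piecewise_linear(t, b0, b1, b2, knot):
        """Mean trajectory with slope b1 before the knot and b1 + b2 after it."""
        return b0 + b1 * t + b2 * np.maximum(t - knot, 0.0)

    # Illustrative verbal-memory trajectories: decline up to the knot (end of
    # chemotherapy), then partial recovery; per-subject random effects on the
    # intercept and slopes give individual variation around the mean curve.
    t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # limited number of occasions
    knot = 1.0
    rng = np.random.default_rng(1)
    for _ in range(5):
        b0, b1, b2 = rng.normal([50.0, -4.0, 6.0], [3.0, 1.0, 1.0])  # random effects
        y = piecewise_linear(t, b0, b1, b2, knot) + rng.normal(0, 1.0, t.size)
        print(np.round(y, 1))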

Relevance:

20.00%

Publisher:

Abstract:

Police work tasks are diverse and require the ability to take command, demonstrate leadership, make serious decisions and be self-directed (Beck, 1999; Brunetto & Farr-Wharton, 2002; Howard, Donofrio & Boles, 2002). This work is usually performed in pairs, or sometimes by an officer working alone. Operational police work is seldom performed under the watchful eyes of a supervisor, and a great deal of reliance is placed on the high levels of motivation and professionalism of individual officers. Research has shown that highly motivated workers produce better outcomes (Whisenand & Rush, 1998; Herzberg, 2003). It is therefore important that Queensland police officers are highly motivated to provide a quality service to the Queensland community. This research aims to identify factors which motivate Queensland police to perform quality work. Researchers acknowledge that there is a lack of research and knowledge in regard to the factors which motivate police (Beck, 1999; Bragg, 1998; Howard, Donofrio & Boles, 2002; McHugh & Verner, 1998). The motivational factors were identified in relation to the demographic variables of age, sex, rank, tenure and education. The model for this research is Herzberg's (1959) two-factor theory of workplace motivation. Herzberg found that there are two broad types of workplace motivational factors: those driven by a need to prevent loss or harm, and those driven by a need to gain personal satisfaction or achievement. His study identified 16 basic sub-factors that operate in the workplace. The research utilised a questionnaire instrument based on the sub-factors identified by Herzberg (1959). The questionnaire consists of an initial section seeking demographic information about the participant, followed by 51 Likert-scale questions. The instrument is an expanded version of an instrument previously used in doctoral studies to identify sources of police motivation (Holden, 1980; Chiou, 2004). The questionnaire was forwarded to approximately 960 police in the Brisbane Metropolitan North Region. The data were analysed using factor analysis, MANOVAs, ANOVAs and multiple regression analysis to identify the key sources of police motivation and to determine the relationships between demographic variables (age, rank, educational level, tenure and generation cohort) and motivational factors. A total of 484 officers from the sample population of 960 responded to the questionnaire. Factor analysis revealed five broad Prime Motivational Factors that motivate police in their work: Feeling Valued, Achievement, Workplace Relationships, the Work Itself, and Pay and Conditions. The factor Feeling Valued highlighted the importance of positive, supportive leaders in motivating officers. Many officers commented that supervisors who provided only negative feedback diminished their sense of feeling valued and were a key source of de-motivation. Officers also frequently commented that they were motivated by operational police work itself, while demonstrating a strong sense of identity with their team and colleagues. The study showed a general need for acceptance by peers and an idealistic motivation to assist members of the community in need and protect victims of crime. Generational cohorts were not found to exert a significant influence on police motivation. The demographic variable with the single greatest influence on police motivation was tenure. Motivation levels were found to drop dramatically during the first two years of an officer's service and generally did not improve significantly until near retirement age. The findings of this research provide the foundation for a number of recommendations regarding police retirement, training and work allocation, aimed at improving police motivation levels. The five-factor Prime Motivational Factor model developed in this study is recommended for use as a planning tool by police leaders to improve the motivational and job-satisfaction components of Police Service policies. The findings of this study also provide a better understanding of the current sources of police motivation, and are expected to have valuable application for Queensland police human resource management when considering policies and procedures in the areas of motivation, stress reduction and attracting suitable staff to specific areas of responsibility.

Relevance:

20.00%

Publisher:

Abstract:

Advances in symptom management strategies through a better understanding of cancer symptom clusters depend on the identification of symptom clusters that are valid and reliable. The purpose of this exploratory research was to investigate alternative analytical approaches to identifying symptom clusters for patients with cancer, using readily accessible statistical methods, and to justify which methods of identification may be appropriate for this context. Three studies were undertaken: (1) a systematic review of the literature, to identify analytical methods commonly used for symptom cluster identification for cancer patients; (2) a secondary data analysis to identify symptom clusters and compare alternative methods, as a guide to best practice approaches in cross-sectional studies; and (3) a secondary data analysis to investigate the stability of symptom clusters over time. The systematic literature review identified, in the 10 years prior to March 2007, 13 cross-sectional studies implementing multivariate methods to identify cancer-related symptom clusters. The methods commonly used to group symptoms were exploratory factor analysis, hierarchical cluster analysis and principal components analysis. Common factor analysis methods were recommended as the best practice cross-sectional methods for cancer symptom cluster identification. A comparison of alternative common factor analysis methods was conducted in a secondary analysis of a sample of 219 ambulatory cancer patients with mixed diagnoses, assessed within one month of commencing chemotherapy treatment. Principal axis factoring, unweighted least squares and image factor analysis identified five consistent symptom clusters, based on patient self-reported distress ratings of 42 physical symptoms. Extraction of an additional cluster was necessary when using alpha factor analysis to determine clinically relevant symptom clusters. The recommended approaches for symptom cluster identification using data that are not multivariate normal were: principal axis factoring or unweighted least squares for factor extraction, followed by oblique rotation; and use of the scree plot and Minimum Average Partial procedure to determine the number of factors. In contrast to other studies, which typically interpret pattern coefficients alone, in these studies symptom clusters were determined on the basis of structure coefficients. This approach was adopted for the stability of the results, as structure coefficients are the correlations between symptoms and factors and are unaffected by the correlations between factors. Symptoms could be associated with multiple clusters, as a foundation for investigating potential interventions. The stability of these five symptom clusters was investigated in separate common factor analyses, 6 and 12 months after chemotherapy commenced. Five qualitatively consistent symptom clusters were identified over time (Musculoskeletal-discomforts/lethargy, Oral-discomforts, Gastrointestinal-discomforts, Vasomotor-symptoms, Gastrointestinal-toxicities), but at 12 months two additional clusters were determined (Lethargy and Gastrointestinal/digestive symptoms). Future studies should include physical, psychological, and cognitive symptoms. Further investigation of the identified symptom clusters is required for validation, to examine causality, and potentially to suggest interventions for symptom management. Future studies should also use longitudinal analyses to investigate change in symptom clusters, the influence of patient-related factors, and the impact on outcomes (e.g., daily functioning) over time.
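
As a small illustration of the pattern/structure distinction these analyses rely on, the numpy sketch below post-multiplies a pattern matrix by the factor correlation matrix (Phi) from an oblique rotation to obtain structure coefficients, then reads cluster membership from a cut-off; all loadings, correlations and the cut-off are invented for the example.

    import numpy as np

    # Hypothetical pattern matrix (symptoms x factors) from an oblique rotation
    pattern = np.array([
        [0.70, 0.05],   # e.g. muscle aches
        [0.65, 0.10],   # e.g. lack of energy
        [0.05, 0.75],   # e.g. nausea
        [0.10, 0.60],   # e.g. vomiting
    ])

    # Hypothetical factor correlation matrix (Phi) from the oblique rotation
    phi = np.array([
        [1.00, 0.40],
        [0.40, 1.00],
    ])

    # Structure coefficients: simple correlations between symptoms and factors;
    # for an oblique rotation, S = P @ Phi
    structure = pattern @ phi
    print(np.round(structure, 2))

    # Cluster membership read from structure coefficients above a cut-off,
    # so a symptom may be associated with more than one cluster
    print(structure >= 0.40)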

Relevance:

20.00%

Publisher:

Abstract:

Accurate estimation of input parameters is essential to ensure the accuracy and reliability of hydrologic and water quality modelling. Calibration is an approach for obtaining accurate input parameters by comparing observed and simulated results. However, the calibration approach is limited, as it is only applicable to catchments where monitoring data are available. Therefore, a methodology to estimate appropriate model input parameters is critical, particularly for catchments where monitoring data are not available. In the research study discussed in this paper, pollutant build-up parameters derived from catchment field investigations are compared with those obtained from model calibration using MIKE URBAN, for three catchments in Southeast Queensland, Australia. Additionally, the sensitivity of MIKE URBAN input parameters was analysed. It was found that the Reduction Factor is the most sensitive parameter for peak flow and total runoff volume estimation, whilst the Build-up Rate is the most sensitive parameter for TSS load estimation. Consequently, these input parameters should be determined accurately in hydrologic and water quality simulations using MIKE URBAN. Furthermore, an empirical equation was derived for Southeast Queensland, Australia, to convert build-up parameters derived from catchment field investigations into MIKE URBAN input build-up parameters. This provides guidance on allowing for regional variation when estimating input parameters for catchment modelling using MIKE URBAN where monitoring data are not available.
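
As a rough illustration of how a build-up parameter drives simulated loads, the sketch below uses a generic exponential build-up form common in stormwater modelling and varies the build-up rate one at a time; this is not MIKE URBAN's internal formulation, and all parameter values are invented.

    import numpy as np

    def exponential_buildup(days_dry, buildup_rate, max_buildup):
        """Generic exponential pollutant build-up (kg/ha) over antecedent dry days.

        A common stormwater build-up form, used here only to illustrate how a
        build-up rate parameter drives the simulated load; not MIKE URBAN's
        internal formulation, and the parameter values are invented.
        """
        return max_buildup * (1.0 - np.exp(-buildup_rate * days_dry))

    # Simple one-at-a-time sensitivity check on the build-up rate
    days = 7
    base = exponential_buildup(days, buildup_rate=0.4, max_buildup=12.0)
    for rate in (0.2, 0.4, 0.8):
        load = exponential_buildup(days, rate, 12.0)
        print(f"rate={rate:.1f}  build-up={load:.2f} kg/ha  change={100*(load-base)/base:+.0f}%")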

Relevance:

20.00%

Publisher:

Abstract:

The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation for these images. A specific fixed decomposition structure is designed to be used by the wavelet packet in order to save on computation, transmission, and storage costs. This decomposition structure is based on analysis of the information packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean square error criterion as well as the sensitivities of human vision. To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model shape parameter is formulated to estimate the model parameters. A noise shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as the truncation level and scaling factor. In lattice-based compression algorithms reported in the literature, the lattice structure is commonly predetermined, leading to a non-optimized quantization approach. In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made and no training or multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage. Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all source vectors without the need to project them on the lattice outermost shell, while it properly maintains a small codebook size. It also resolves the wedge region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training and multi-quantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images. For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of the necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high-quality reconstructed images with better compression ratios than other available algorithms. To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structure-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
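
The sketch below illustrates two of the ingredients described above in simplified form: a multi-level 2-D wavelet decomposition (a plain DWT via PyWavelets rather than the thesis's fixed wavelet-packet structure) and a moment-matching estimate of the generalized Gaussian shape parameter for each detail subband. The grid-search estimator and the toy random image are illustrative assumptions, not the thesis's least-squares estimator or real fingerprint data.

    import numpy as np
    import pywt
    from scipy.special import gamma

    def ggd_shape(band):
        """Estimate the generalized Gaussian shape parameter by moment matching.

        Matches E|x| / sqrt(E[x^2]) to its closed form under a GGD and solves
        for the shape with a simple grid search (illustrative only).
        """
        x = np.ravel(band)
        ratio = np.mean(np.abs(x)) / np.sqrt(np.mean(x ** 2) + 1e-12)
        betas = np.linspace(0.2, 4.0, 2000)
        model = gamma(2.0 / betas) / np.sqrt(gamma(1.0 / betas) * gamma(3.0 / betas))
        return betas[np.argmin(np.abs(model - ratio))]

    # Toy image; a real study would use scanned fingerprint images
    rng = np.random.default_rng(0)
    image = rng.standard_normal((128, 128))

    # Three-level 2-D wavelet decomposition
    coeffs = pywt.wavedec2(image, 'db4', level=3)
    for i, (cH, cV, cD) in enumerate(coeffs[1:], start=1):
        # i = 1 is the coarsest detail level in pywt's ordering
        print(i, round(ggd_shape(cD), 2), round(float(np.var(cD)), 4))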