932 results for Center Sets
Abstract:
Visiting a modern shopping center has become an important part of everyday life. The rapid growth of shopping centers, transportation systems, and modern vehicles has given consumers more choices in where to shop. Although consumers visit shopping centers for many reasons, travel time and shopping center size are important factors influencing how frequently customers visit shopping centers. A survey of the customers of three major shopping centers in Surabaya was conducted to evaluate Ellwood's model and Huff's model. New exponent values N of 0.48 and n of 0.50 were found for Ellwood's model, while a coefficient of 0.267 and an add value of 0.245 were found for Huff's model.
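For context, Huff's model in its standard gravity form says that the probability a consumer visits a given center grows with the center's size and decays with travel time to it. Below is a minimal sketch with illustrative exponents; the abstract does not spell out how its fitted values map onto these parameters, so treat the numbers as placeholders.

```python
# Minimal sketch of the standard Huff gravity model (illustrative parameters,
# not the paper's fitted values): the probability that a consumer visits
# center j grows with its size S_j and decays with travel time T_j.
def huff_probabilities(sizes, travel_times, alpha=1.0, lam=2.0):
    """sizes[j]: size of center j; travel_times[j]: travel time to center j.
    Returns one visit probability per center (they sum to 1)."""
    attraction = [s ** alpha / t ** lam for s, t in zip(sizes, travel_times)]
    total = sum(attraction)
    return [a / total for a in attraction]

# Example: three centers of 40,000 / 25,000 / 60,000 m^2 at 10 / 5 / 20 minutes.
print(huff_probabilities([40_000, 25_000, 60_000], [10, 5, 20]))
```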
Abstract:
This study assesses the recently proposed data-driven background dataset refinement technique for speaker verification using SVM feature sets other than the GMM supervector features for which it was originally designed. The performance improvements achieved in each trialled SVM configuration demonstrate the versatility of background dataset refinement. This work also extends the originally proposed technique to exploit support vector coefficients as an impostor suitability metric in the data-driven selection process. Using support vector coefficients improved the performance of the refined datasets in the evaluation of unseen data. Further, attempts are made to exploit the differences in impostor suitability measures from varying feature spaces to provide added robustness.
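As a rough illustration of the support-vector-coefficient idea, the sketch below (hypothetical code, not the authors' implementation) trains one linear SVM per enrolled client against a candidate impostor pool and ranks impostor examples by their accumulated dual-coefficient magnitude, keeping the top-ranked examples as the refined background dataset.

```python
# Hypothetical sketch: support vector coefficients as an impostor suitability
# metric for background dataset refinement. Impostors that repeatedly become
# strong support vectors across client SVMs are kept in the refined set.
import numpy as np
from sklearn.svm import SVC

def refine_background(client_vectors, impostor_pool, keep=500):
    """client_vectors: list of (n_i, d) arrays, one per enrolled client.
    impostor_pool: (m, d) array of candidate impostor feature vectors."""
    suitability = np.zeros(len(impostor_pool))
    for client in client_vectors:
        X = np.vstack([client, impostor_pool])
        y = np.r_[np.ones(len(client)), np.zeros(len(impostor_pool))]
        svm = SVC(kernel="linear").fit(X, y)
        # Map support vector indices back to pool positions and accumulate
        # the magnitude of each impostor's dual coefficient.
        for idx, coef in zip(svm.support_, np.abs(svm.dual_coef_[0])):
            if idx >= len(client):
                suitability[idx - len(client)] += coef
    ranked = np.argsort(suitability)[::-1]
    return impostor_pool[ranked[:keep]]
```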
Abstract:
We review all journal articles based on "PSED-type" research, i.e., longitudinal, empirical studies of large probability samples of ongoing business start-up efforts. We conclude that the research stream has yielded interesting findings, sometimes by confirming prior research with a less bias-prone methodology and at other times by challenging whether prior conclusions are valid for the early stages of venture development. Most importantly, the research has addressed new, process-related research questions that prior research has shunned or been unable to study in a rigorous manner. The research has revealed an enormous and fascinating variability in new venture creation that also makes it challenging to arrive at broadly valid generalizations. An analysis of the findings across studies, as well as an examination of those studies that have been relatively more successful at explaining outcomes, gives good guidance regarding what is required to achieve strong and credible results. We compile and present such advice to users of existing data sets and designers of new projects in the following areas: statistically representative and/or theoretically relevant sampling; level-of-analysis issues; dealing with process heterogeneity; dealing with other heterogeneity issues; and choice and interpretation of dependent variables.
Abstract:
Ocean processes are dynamic and complex events that occur on multiple spatial and temporal scales. To obtain a synoptic view of such events, ocean scientists focus on the collection of long-term time series data sets. Generally, these time series measurements are provided continually, in real or near-real time, by fixed sensors, e.g., buoys and moorings. In recent years, mobile sensor platforms, e.g., Autonomous Underwater Vehicles, have increasingly been used to enable dynamic acquisition of time series data sets. However, these mobile assets are not used to their full capability, generally performing only repeated transects or user-defined patrolling loops. Here, we provide an extension to repeated patrolling of a designated area. Our algorithms provide the ability to adapt a standard mission to increase information gain in areas of greater scientific interest. By implementing a velocity control optimization along the predefined path, we are able to increase or decrease spatiotemporal sampling resolution to satisfy the sampling requirements necessary to properly resolve an oceanic phenomenon. We present a path planning algorithm that defines a sampling path optimized for repeatability, followed by the derivation of a velocity controller that defines how the vehicle traverses the given path. The application of these tools is motivated by an ongoing research effort to understand the oceanic region off the coast of Los Angeles, California. The computed paths and velocities were implemented on autonomous vehicles for data collection during sea trials. Results from this data collection are presented and compared for analysis of the proposed technique.
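A toy version of the velocity-control idea (a sketch under simplifying assumptions, not the authors' optimizer): commanded speed is lowered over path segments flagged as scientifically interesting, which concentrates samples there, and segment times are then rescaled toward a mission-time budget within the vehicle's speed limits.

```python
# Sketch: slower speed over high-interest path segments yields denser
# spatiotemporal sampling there; a uniform rescaling then pushes the total
# traversal time toward the mission budget, subject to speed limits.
def segment_speeds(lengths, interest, v_min, v_max, time_budget):
    """lengths[k]: segment length (m); interest[k]: nonnegative science
    weight for segment k. Returns one commanded speed (m/s) per segment."""
    raw = [v_max / (1.0 + w) for w in interest]          # slow where interesting
    speeds = [min(max(v, v_min), v_max) for v in raw]    # respect speed limits
    total_time = sum(L / v for L, v in zip(lengths, speeds))
    scale = total_time / time_budget                     # >1 speeds up, <1 slows
    return [min(max(v * scale, v_min), v_max) for v in speeds]

# Example: the 100 m middle segment has high interest, so it is traversed slowly.
print(segment_speeds([200, 100, 300], [0.2, 3.0, 0.5], 0.5, 2.0, 600))
```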
Abstract:
This paper presents a novel two-stage information filtering model that combines the merits of term-based and pattern-based approaches to effectively filter the sheer volume of incoming information. In particular, the first filtering stage is supported by a novel rough analysis model that efficiently removes a large number of irrelevant documents, thereby addressing the overload problem. The second filtering stage is empowered by a semantically rich pattern taxonomy mining model that effectively fetches incoming documents according to the specific information needs of a user, thereby addressing the mismatch problem. Experiments were conducted to compare the proposed two-stage filtering (T-SM) model with other possible "term-based + pattern-based" or "term-based + term-based" IF models. The results on the RCV1 corpus show that the T-SM model significantly outperforms the other types of "two-stage" IF models.
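Schematically, the two-stage idea can be read as a cheap term-based gate followed by a pattern-based ranker, as in the hypothetical simplification below; the paper's rough analysis and pattern taxonomy mining models are considerably richer than the plain sums used here.

```python
# Hypothetical two-stage filter in the spirit of the T-SM model: stage 1 uses
# cheap term weights to discard clearly irrelevant documents (overload),
# stage 2 ranks survivors by the mined pattern sets they cover (mismatch).
def two_stage_filter(docs, term_weights, patterns, threshold):
    """docs: list of token lists; term_weights: {term: weight} from the user
    profile; patterns: list of term sets mined from relevant documents."""
    survivors = [d for d in docs
                 if sum(term_weights.get(t, 0.0) for t in d) >= threshold]
    def pattern_score(doc):
        tokens = set(doc)
        # A document scores by the (length-weighted) patterns it contains.
        return sum(len(p) for p in patterns if p <= tokens)
    return sorted(survivors, key=pattern_score, reverse=True)
```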
Abstract:
Analysis of footprints or footwear impressions recovered from a crime scene is a well-known and well-accepted part of forensic investigation. When this evidence is obtained by investigating officers, comparative analysis against a suspect's evidence may be undertaken, either by detectives or, in some cases, by podiatrists with experience in forensic analysis. Frequently asked questions of a podiatrist include: "What additional information should be collected from a suspect (for the purposes of comparison), and how should it be collected?" This paper explores the answers to these and related questions based on 20 years of practical experience in the field of crime scene analysis as it relates to podiatry and forensics. Elements of normal and abnormal foot function are explored and used to explain the high degree of variability in wear patterns produced by the interaction of the foot and footwear. Based on this understanding, the potential for identifying unique features of the user and correlating them with footwear evidence becomes apparent. Standard protocols adopted by podiatrists allow more precise, reliable, and valid results to be obtained from their analysis. Complex data sets are now being obtained by investigating officers and, in collaboration with podiatrists, higher quality conclusions are being achieved. This presentation details the results of investigations that have used standard protocols to collect and analyse footwear evidence from suspects in recent major crimes.
Abstract:
It is natural for those involved in entertainment to focus on the art. However, as with any activity, even in a free society, those involved in entertainment industries must operate within boundaries set by the law. This article examines the main areas of law that affect entertainment in an Australian context. It contrasts the position on freedom of expression in Australia with that in the United States, which also promotes freedom of expression in a free society. It then briefly canvasses the main limits on entertainment productions under Australian law.
Abstract:
This paper presents an extensive review of services, six-sigma, and the application of six-sigma to services. To improve service quality, a focus on the service process is necessary. Six-sigma is a philosophy that also concentrates on process improvement, so six-sigma, if properly applied, can be useful for services. This study focuses on applying six-sigma to a wider range of services. The wider applicability of six-sigma depends on the identification of key performance indicators (KPIs) for different types of service processes. A case study was conducted in call center services to identify, analyze, and compare critical-to-quality characteristics (CTQs) and KPIs with those of other types of services reported in the literature. This study will be helpful to both practitioners and researchers.
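For reference, the quantitative core of six-sigma carries over directly to service defects such as mishandled calls: defects per million opportunities (DPMO) and the short-term sigma level with the conventional 1.5-sigma shift. The figures in this sketch are illustrative, not taken from the case study.

```python
# Standard six-sigma arithmetic: DPMO and the short-term sigma level
# (z-value of the process yield plus the conventional 1.5-sigma shift).
from statistics import NormalDist

def sigma_level(defects, units, opportunities_per_unit):
    dpmo = defects / (units * opportunities_per_unit) * 1_000_000
    sigma = NormalDist().inv_cdf(1 - dpmo / 1_000_000) + 1.5
    return dpmo, sigma

# Illustrative call center figures: 1,200 mishandled calls out of 100,000,
# one defect opportunity per call -> 12,000 DPMO, roughly 3.8 sigma.
print(sigma_level(1_200, 100_000, 1))
```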
Abstract:
The primary objective of the experiments reported here was to demonstrate the effects of opening up the design envelope for auditory alarms on people's ability to learn the meanings of a set of alarms. Two sets of alarms were tested for the same set of functions: one already extant, and one newly designed according to a rationale set out by the authors aimed at increasing the heterogeneity of the alarm set and incorporating some well-established principles of alarm design. For both sets of alarms, a similarity-rating experiment was followed by a learning experiment. The results showed that the newly designed set was judged to be more internally dissimilar, and was easier to learn, than the extant set. The design rationale outlined in the paper is useful for design purposes in a variety of practical domains and shows how alarm designers, even at a relatively late stage in the design process, can improve the efficacy of an alarm set.
Abstract:
We address the problem of face recognition in video by employing the recently proposed probabilistic linear discriminant analysis (PLDA). The PLDA has been shown to be robust against pose and expression variation in image-based face recognition. In this research, the method is extended and applied to video, where image-set-to-image-set matching is performed. We investigate two approaches to computing similarities between image sets using the PLDA: the closest pair approach and the holistic sets approach. To better model face appearances in video, we also propose a heteroscedastic version of the PLDA, which learns the within-class covariance of each individual separately. Our experiments on the VidTIMIT and Honda datasets show that the combination of the heteroscedastic PLDA and the closest pair approach achieves the best performance.
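Of the two set-matching strategies, the closest pair approach has a particularly direct reading: two image sets match as well as their best-matching pair of frames. The sketch below assumes a pairwise PLDA verification score plda_score(x, y) is already available (a hypothetical helper; PLDA training itself is not shown), and it does not attempt the holistic approach, which scores all images of both sets jointly under the PLDA model.

```python
# Sketch of closest pair image-set matching: take the best PLDA verification
# score over all cross-set frame pairs. plda_score is a hypothetical helper
# returning a pairwise same-identity score for two face feature vectors.
def closest_pair_similarity(set_a, set_b, plda_score):
    """set_a, set_b: sequences of per-frame face feature vectors."""
    return max(plda_score(a, b) for a in set_a for b in set_b)
```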
Abstract:
Foreword: In this paper I call upon a praxiological approach. Praxeology (an early alteration of praxiology) is the study of human action and conduct. The name praxeology/praxiology takes its root in praxis, Medieval Latin, from Greek, doing, action, from prassein, to do, to practice (Merriam-Webster Dictionary). Having been involved in project management education, research and practice for the last twenty years, I have constantly tried to improve and to provide a better understanding/knowledge of the field and related practice, and as a consequence to widen and deepen the competencies of the people I was working with (and my own competencies as well!), assuming that better project management leads to more efficient and effective use of resources, to the development of people and, in the end, to a better world. For some time I have perceived a need to clarify the foundations of the discipline of project management, or at least to elucidate what these foundations could be. An immodest task, one might say! But not a neutral one! I am constantly surprised by the way the world (i.e., organizations, universities, students and professional bodies) sees project management: as a set of methods, techniques and tools, interacting with other fields – general management, engineering, construction, information systems, etc. – bringing some effective ways of dealing with various sets of problems – from launching a new satellite to product development through to organizational change.