918 results for Small areas
Abstract:
The technological environment in which contemporary small and medium-sized enterprises (SMEs) operate can only be described as dynamic. The seemingly exponential nature of technological change, characterised by perceived increases in the benefits associated with various technologies, shortening product life cycles and changing standards, provides a complex and challenging operational context for the small and medium-sized enterprise. The development of infrastructures capable of supporting the Wireless Application Protocol (WAP) and associated 'wireless' applications represents the latest generation of technological innovation with potential appeal to SMEs and end-users alike. The primary aim of this research was to understand the mobile data technology needs of SMEs in a regional setting. The research was especially concerned with perceived needs across three market segments: non-adopters of new technology, partial adopters of new technology and full adopters of new technology. Working with an industry partner, focus groups were conducted with each of these segments, with the discussions focused on the use of the latest WAP products and services. Some of the results are presented in this paper.
Abstract:
This chapter provides an overview of available current data measuring crime in Australia's States and Territories, broken down into regions and localities. The data is limited, has reliability problems and contains many gaps. Nevertheless, when the data are analysed according to offence type (in particular, violence versus property offences), an interesting but complicated empirical picture emerges that departs from what most scholars and policy makers have commonly assumed about crime and rural communities - that there is not much of it! The chapter begins with an assessment of the uses and limitations of different ways of measuring crime for those interested in a spatialised analysis of crime dispersion in rural communities.
Abstract:
The standard Blanchard-Quah (BQ) decomposition forces aggregate demand and supply shocks to be orthogonal. However, this assumption is problematic for a nation with an inflation target. The very notion of inflation targeting means that monetary policy reacts to changes in aggregate supply. This paper employs a modification of the BQ procedure that allows for correlated shifts in aggregate supply and demand. It is found that shocks to Australian aggregate demand and supply are highly correlated. The estimated shifts in the aggregate demand and supply curves are then used to measure the effects of inflation targeting on the Australian inflation rate and level of GDP.
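For context, the standard Blanchard-Quah identification that this paper modifies recovers structural shocks from a reduced-form VAR by imposing that demand shocks have no long-run effect on output. The following is a minimal sketch of that standard procedure on toy data; the variable ordering (output growth, inflation), lag length and series are illustrative assumptions, and the paper's modification allowing correlated shocks is not reproduced.

```python
# Hedged sketch of the standard Blanchard-Quah (BQ) decomposition on toy data.
# Variables (output growth, inflation) and lag order are illustrative only.
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
y = rng.standard_normal((200, 2))          # placeholder for [output growth, inflation]

res = VAR(y).fit(2)                        # reduced-form VAR(2)
k = y.shape[1]
A1 = sum(res.coefs)                        # sum of lag coefficient matrices, A(1)
F = np.linalg.inv(np.eye(k) - A1)          # long-run multiplier (I - A(1))^{-1}

# Long-run covariance of the reduced-form shocks and its Cholesky factor.
# A lower-triangular long-run impact matrix means the second ("demand") shock
# has zero long-run effect on the first variable ("output").
C = F @ res.sigma_u @ F.T
long_run_impact = np.linalg.cholesky(C)

# Contemporaneous structural impact matrix and implied structural shocks.
B0 = np.linalg.solve(F, long_run_impact)
structural_shocks = res.resid @ np.linalg.inv(B0).T
print(np.corrcoef(structural_shocks.T))    # ~identity under the orthogonality assumption
```

The last line makes the abstract's point concrete: the standard procedure forces the recovered demand and supply shocks to be uncorrelated by construction, which is exactly the assumption the paper relaxes for an inflation-targeting economy.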
Abstract:
Objective: To investigate family members’ experiences of involvement in a previous study (conducted August 1995 to June 1997) following their child’s diagnosis with Ewing’s sarcoma. Design: Retrospective survey, conducted between 1 November and 30 November 1997, using a postal questionnaire. Participants: Eighty-one of 97 families who had previously completed an in-depth interview as part of a national case–control study of Ewing’s sarcoma. Main outcome measures: Participants’ views on how participation in the previous study had affected them and what motivated them to participate. Results: Most study participants indicated that taking part in the previous study had been a positive experience. Most (n = 79 [97.5%]) believed their involvement would benefit others and were glad to have participated, despite expecting and finding some parts of the interview to be painful. Parents whose child was still alive at the time of the interview recalled participation as more painful than those whose child had died before the interview. Parents who had completed the interview less than a year before our study recalled it as being more painful than those who had completed it more than a year before. Conclusions: That people suffering bereavement are generally eager to participate in research and may indeed find it a positive experience is useful information for members of ethics review boards and other “gatekeepers”, who frequently need to determine whether studies into sensitive areas should be approved. Such information may also help members of the community to make an informed decision regarding participation in such research.
Abstract:
Purpose: The purpose of this paper is to explore the role of cross-functional teams in the alignment between system effectiveness and operational effectiveness after the implementation of enterprise information systems (EIS). In addition, it aims to explore the contribution of cross-functional teams to improvement in operational performance.
Design/methodology/approach: The research uses a combination of qualitative and quantitative methods, in a two-stage methodological approach, to investigate the influence of cross-functional teams on the alignment between system effectiveness and operational effectiveness, and the impact of that alignment on the improvement in operational performance.
Findings: Initial findings suggest that factors stemming from system effectiveness and the performance objectives stemming from operational effectiveness are important and significantly correlated factors that promote the alignment between the effectiveness of technological implementation and the effectiveness of operations. In addition, confirmatory factor analysis has been used to find the structural relationships and provide explanations for the stated alignment and the contribution of cross-functional teams to the improvement in operational performance.
Research limitations/implications: The principal limitation of this study is its small sample size.
Practical implications: Cross-functional teams have been used by many organisations as a way of involving expertise from different functional areas in the implementation of innovative technologies. An appropriate use of the dimensions that emerged from this research, in the context of cross-functional teams, will assist organisations to properly utilise cross-functional teams with the aim of improving operational performance.
Originality/value: The paper presents a new approach to measuring the effectiveness of EIS implementation by adding new dimensions to its measurement.
Abstract:
We review and discuss the literature on small firm growth with an intention to provide a useful vantage point for new research studies regarding this important phenomenon. We first discuss conceptual and methodological issues that represent critical choices for those who research growth and which make it challenging to compare results from previous studies. The substantial review of past research is organized into four sections representing two smaller and two larger literatures. The first of the latter focuses on internal and external drivers of small firm growth. Here we find that much has been learnt and that many valuable generalizations can be made. However, we also conclude that more research of the same kind is unlikely to yield much. While interactive and non-linear effects may be worth pursuing, it is unlikely that any new and important growth drivers or strong, linear main effects would be found. The second large literature deals with organizational life-cycles or stages of development. While deservedly criticized for unwarranted determinism and weak empirics, this type of approach addresses problems of high practical and also theoretical relevance, and should not be shunned by researchers. We argue that with a change in the fundamental assumptions and improved empirical design, research on the organizational and managerial consequences of growth is an important line of inquiry. With this, we overlap with one of the smaller literatures, namely studies focusing on the effects of growth. We argue that studies too often assume that growth equals success. We advocate instead the use of growth as an intermediary variable that influences more fundamental goals in ways that should be carefully examined rather than assumed. The second small literature distinguishes between different modes or forms of growth, including, e.g., organic vs. acquisition-based growth, and international expansion. We note that the mode of growth is an important topic that has been understudied in the growth literature, whereas in other branches of research aspects of it may have been studied intensely, but not primarily from a growth perspective. In the final section we elaborate on ways forward for research on small firm growth. We point to rich opportunities for researchers who look beyond drivers of growth, where growth is viewed as a homogeneous phenomenon assumed to unambiguously reflect success, and instead focus on growth as a process and a multi-dimensional phenomenon, as well as on how growth relates to more fundamental outcomes.
Abstract:
This literature review examines the relationship between traffic lane widths and the safety of road users. It focuses on the impacts of lane widths on motor vehicle behaviour and cyclists' safety. The review commenced with a search of available databases. Peer-reviewed articles and road authority reports were reviewed, as well as current engineering guidelines. Research shows that traffic lane width influences drivers' perceived difficulty of the task, risk perception and possibly speed choices. Total roadway width, and the presence of on-road cycling facilities, influence cyclists' positioning on the road. Lateral displacement between bicycles and vehicles is smallest when a marked bicycle facility is present. Reduced motor vehicle speeds can significantly improve the safety of vulnerable road users, particularly pedestrians and cyclists. It has been shown that reducing lane widths on urban roads, through various mechanisms, could result in a safer environment for all road users.
Abstract:
In this paper we describe a low-cost flight control system for a small (60 class) helicopter which is part of a larger project to develop an autonomous flying vehicle. Our approach differs from that of others in not using an expensive inertial/GPS sensing system. The primary sensors for vehicle stabilization are a low-cost inertial sensor and a pair of CMOS cameras. We describe the architecture of our flight control system, the inertial and visual sensing subsystems and present some flight control results.
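The abstract does not detail how the inertial and visual measurements are combined for stabilisation. Purely as a generic illustration of one common way a low-cost gyro and a camera-derived attitude estimate can be fused, here is a minimal complementary-filter sketch; the sample rate, gain, noise figures and function names are assumptions, not the authors' design.

```python
# Generic complementary-filter sketch for fusing a low-cost rate gyro (good
# short-term, drifts long-term) with a vision-derived attitude (noisy but unbiased).
# Illustrative assumption only; not the flight control system described in the paper.
import numpy as np

DT = 0.01        # assumed 100 Hz inertial update rate
ALPHA = 0.98     # weight on the integrated-gyro (high-pass) path

def complementary_filter(prev_angle, gyro_rate, vision_angle):
    """Blend integrated gyro rate with the slower absolute vision estimate."""
    gyro_angle = prev_angle + gyro_rate * DT
    return ALPHA * gyro_angle + (1.0 - ALPHA) * vision_angle

# Toy usage: drifting gyro versus a noisy but unbiased vision reference.
rng = np.random.default_rng(1)
true_roll = 0.1                      # rad, held constant for the toy example
roll_est = 0.0
for _ in range(500):
    gyro_rate = 0.02 * rng.standard_normal() + 0.01     # rate noise plus bias drift
    vision_roll = true_roll + 0.05 * rng.standard_normal()
    roll_est = complementary_filter(roll_est, gyro_rate, vision_roll)
print(f"estimated roll = {roll_est:.3f} rad (true {true_roll} rad)")
```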
Abstract:
This paper examines the algebraic cryptanalysis of small scale variants of LEX-BES. LEX-BES is a stream cipher based on the Advanced Encryption Standard (AES) block cipher. LEX is a generic method for constructing a stream cipher from a block cipher, initially introduced by Biryukov at eSTREAM, the ECRYPT Stream Cipher project, in 2005. The Big Encryption System (BES) is a block cipher introduced at CRYPTO 2002 which facilitates the algebraic analysis of the AES block cipher. In this paper, experiments were conducted to find solutions of the equation systems describing small scale LEX-BES using Gröbner Basis computations. This follows an approach similar to the work by Cid, Murphy and Robshaw at FSE 2005 that investigated algebraic cryptanalysis of small scale variants of the BES. The difference between LEX-BES and BES is that, due to the way the keystream is extracted, the number of unknowns in the LEX-BES equations is smaller than the number in BES. As far as the author knows, this is the first attempt at creating solvable equation systems for stream ciphers based on the LEX method using Gröbner Basis computations.
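The general technique the abstract relies on is expressing a cipher as a polynomial equation system over GF(2) and solving it with a Gröbner basis. As a hedged illustration of that generic step only (a made-up toy system, unrelated to the actual LEX-BES equations), the following uses SymPy's Gröbner basis routine over GF(2).

```python
# Toy illustration of solving a Boolean polynomial system with a Groebner basis,
# the general technique applied to small scale LEX-BES in the paper.
# The equations below are invented for illustration and are unrelated to the cipher.
from sympy import symbols, groebner

x, y, z = symbols('x y z')

# Toy "cipher" relations over GF(2); the field equations v**2 - v force 0/1 solutions.
system = [
    x*y + z + 1,        # an S-box-style nonlinear relation
    x + y + z,          # a linear keystream-style relation
    x**2 - x, y**2 - y, z**2 - z,
]

G = groebner(system, x, y, z, order='lex', modulus=2)
print(G)   # a lexicographic basis from which the Boolean solutions can be read off
```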
Abstract:
RFID has been widely used in today's commercial and supply chain industries, due to the significant advantages it offers and its relatively low production cost. However, this ubiquitous technology has inherent problems in security and privacy. This calls for the development of simple, efficient and cost-effective mechanisms against a variety of security threats. This paper proposes a two-step authentication protocol based on the randomized hash-lock scheme proposed by S. Weis in 2003. By introducing additional measures during the authentication process, this new protocol is shown to enhance the security of RFID significantly, and protects passive tags from almost all major attacks, including tag cloning, replay, full-disclosure, tracking, and eavesdropping. Furthermore, no significant changes to the tags are required to implement this protocol, and the low complexity of the randomized hash-lock algorithm is retained.
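For context, the baseline the paper builds on is Weis's randomized hash-lock: the tag answers a reader query with a fresh nonce and a hash of the nonce concatenated with its ID, and the reader identifies the tag by searching its ID database for a matching hash. The sketch below illustrates only that published baseline, with SHA-256 as a stand-in hash and assumed names; the paper's additional two-step measures are not reproduced.

```python
# Minimal sketch of the randomized hash-lock identification step (Weis, 2003),
# the baseline the proposed two-step protocol extends. Hash choice, nonce size
# and names are illustrative assumptions.
import hashlib
import os

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

class Tag:
    def __init__(self, tag_id: bytes):
        self.tag_id = tag_id
    def respond(self):
        r = os.urandom(16)                 # fresh nonce per query, defeats tracking
        return r, h(r + self.tag_id)       # (r, h(r || ID))

class Reader:
    def __init__(self, known_ids):
        self.known_ids = known_ids         # back-end database of tag IDs
    def identify(self, r: bytes, digest: bytes):
        # Linear search: recompute h(r || ID) for every known ID until one matches.
        for tag_id in self.known_ids:
            if h(r + tag_id) == digest:
                return tag_id
        return None

ids = [b"tag-0001", b"tag-0002", b"tag-0003"]
reader, tag = Reader(ids), Tag(b"tag-0002")
print(reader.identify(*tag.respond()))     # b'tag-0002'
```

Because the response depends on a fresh random nonce each time, an eavesdropper cannot link two responses to the same tag; the cost is the reader's linear search over all known IDs.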
Abstract:
The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation for these images. A specific fixed decomposition structure is designed to be used by the wavelet packet in order to save on the computation, transmission, and storage costs. This decomposition structure is based on analysis of the information packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and the human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean square criteria as well as the sensitivities in human vision. To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model shape parameter is formulated to estimate the model parameters. A noise shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters known as truncation level and scaling factor. In lattice-based compression algorithms reported in the literature, the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach. In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made and no training and multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage. Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all of the source vectors without the need to project these on the lattice outermost shell, while it properly maintains a small codebook size. It also resolves the wedge region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only the cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training and multi-quantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images.
For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of the necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high quality reconstructed images with better compression ratios than other available algorithms. To evaluate the proposed algorithms, their objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for a reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
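As a hedged illustration of two building blocks named above, decomposing an image into wavelet subbands and fitting a generalized Gaussian model to each subband's coefficients, the sketch below uses PyWavelets and SciPy on a synthetic image. The wavelet, decomposition level and estimator (SciPy's maximum-likelihood fit rather than the thesis's least-squares shape-parameter formulation) are assumptions for illustration only.

```python
# Illustrative sketch only: a standard 2-D wavelet decomposition and a generalized
# Gaussian fit to subband coefficients. It does not reproduce the thesis's fixed
# wavelet-packet structure, least-squares shape estimator, or pyramid LVQ stage.
import numpy as np
import pywt
from scipy.stats import gennorm

rng = np.random.default_rng(0)
image = rng.random((128, 128))                 # stand-in for a fingerprint image

# Three-level 2-D DWT; each detail subband is a candidate for separate quantization.
coeffs = pywt.wavedec2(image, wavelet='db4', level=3)

for level, (cH, cV, cD) in enumerate(coeffs[1:], start=1):
    for name, band in (('H', cH), ('V', cV), ('D', cD)):
        # gennorm's beta is the generalized Gaussian shape parameter
        # (beta = 2 is Gaussian, beta = 1 is Laplacian).
        beta, loc, scale = gennorm.fit(band.ravel())
        print(f"level {level} band {name}: shape={beta:.2f}, scale={scale:.3f}")
```

The fitted shape and scale parameters per subband are the kind of statistics that would then drive subband-specific quantizer design and bit allocation in the scheme described above.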