346 results for Quality levels


Relevance:

20.00%

Publisher:

Abstract:

The OECD suggests that countries now have a choice. They can focus on development based on either: competition via investment in technology and innovation, which is important in high-knowledge industries and high-innovation economies; or competition via exchange rates and wages, which is important in industries producing standardised, lower-tech goods and services. The first route will maximise higher-skilled, higher-paid employment growth and living standards. Given the lack of control over the exchange rate, the second route requires competition based on wages. It is essential to understand that markets themselves won’t shift a country from one path to the other. These conclusions arise from the OECD’s recognition that technical progress - the creation of new products or the adoption of more efficient methods of production - is the main source of economic growth and enhanced quality of life. Technological change is, the OECD suggests, “...also the engine for job creation as higher wages and profits resulting from technology-induced productivity gains and lower prices lead to increased demand for new products from existing as well as new industries” (1997: 4). Further, “Competitiveness in high-technology industries is mainly driven by technology factors and much less by wage and exchange rate movements, while the reverse is true in low-technology industries” (OECD 1996e: 12). The OECD has shown that sound macroeconomic conditions, such as the low inflation and reduced public sector debt visible in almost all member countries in the 1990s, are not enough to deal with high levels of unemployment and the need to increase levels of income: “If economic performance is to improve, additional structural reform, which can increase innovation and the diffusion of technologies within and among national economies, seems necessary” (OECD 1997: 4, emphasis added).

Relevance:

20.00%

Publisher:

Abstract:

A trend in the design and implementation of modern industrial automation systems is to integrate computing, communication and control into a unified framework at different levels of machine/factory operations and information processing. These distributed control systems are referred to as networked control systems (NCSs). They are composed of sensors, actuators, and controllers interconnected over communication networks. As most communication networks are not designed for NCS applications, the communication requirements of NCSs may not be satisfied. For example, traditional control systems require data to be accurate, timely and lossless. However, because of random transmission delays and packet losses, control performance may deteriorate badly, and the control system may even be rendered unstable. The main challenge of NCS design is to maintain and improve stable control performance of an NCS. To achieve this, both communication and control methodologies have to be designed. In recent decades, Ethernet and 802.11 networks have been introduced into control networks and have even replaced traditional fieldbus products in some real-time control applications, because of their high bandwidth and good interoperability. As Ethernet and 802.11 networks are not designed for distributed control applications, two aspects of NCS research need to be addressed to make these communication networks suitable for control systems in industrial environments. From the perspective of networking, communication protocols need to be designed to satisfy NCS communication requirements such as real-time communication and high-precision clock consistency. From the perspective of control, methods to compensate for network-induced delays and packet losses are important for NCS design.
To make Ethernet-based and 802.11 networks suitable for distributed control applications, this thesis develops a high-precision relative clock synchronisation protocol and an analytical model for analysing the real-time performance of 802.11 networks, and designs a new predictive compensation method. Firstly, a hybrid NCS simulation environment based on the NS-2 simulator is designed and implemented. Secondly, a high-precision relative clock synchronisation protocol is designed and implemented. Thirdly, transmission delays in 802.11 networks for soft-real-time control applications are modelled using a Markov chain model in which real-time Quality-of-Service parameters are analysed under a periodic traffic pattern. By using a Markov chain model, we can accurately model the tradeoff between real-time performance and throughput performance. Furthermore, a cross-layer optimisation scheme, featuring application-layer flow rate adaptation, is designed to achieve the tradeoff between certain real-time and throughput performance characteristics in a typical NCS scenario with a wireless local area network. Fourthly, as a co-design approach for both the network and the controller, a new predictive compensation method for variable delay and packet loss in NCSs is designed, in which simultaneous end-to-end delays and packet losses during packet transmissions from sensors to actuators are tackled. The effectiveness of the proposed predictive compensation approach is demonstrated using our hybrid NCS simulation environment.
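The prediction-based compensation idea can be sketched in a few lines. This is a generic model-based scheme under assumed values (a scalar plant with coefficients A, B and a feedback gain K), not the thesis's actual method: when a sensor packet is lost or late, the controller propagates its plant model forward instead of waiting for the measurement.

```python
A, B = 0.9, 0.5            # assumed scalar plant: x[k+1] = A*x[k] + B*u[k]
K = 0.8                    # assumed state-feedback gain

def control_step(x_est, measurement, last_u):
    """Use the measurement when a sensor packet arrives; otherwise
    propagate the plant model one step to predict the state."""
    if measurement is not None:        # packet delivered on time
        x_est = measurement
    else:                              # packet lost or late: predict
        x_est = A * x_est + B * last_u
    u = -K * x_est                     # feedback law on the estimate
    return x_est, u

# Two consecutive packet losses (None) bridged by model prediction.
x_est, u = 1.0, 0.0
for measurement in [1.0, None, None, 0.2]:
    x_est, u = control_step(x_est, measurement, u)
```

The design choice here is the standard open-loop-prediction fallback: control quality degrades gracefully with the number of consecutive losses rather than failing outright.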

Relevance:

20.00%

Publisher:

Abstract:

We present a hierarchical model for assessing an object-oriented program's security. Security is quantified using structural properties of the program code to identify the ways in which `classified' data values may be transferred between objects. The model begins with a set of low-level security metrics based on traditional design characteristics of object-oriented classes, such as data encapsulation, cohesion and coupling. These metrics are then used to characterise higher-level properties concerning the overall readability and writability of classified data throughout the program. In turn, these metrics are then mapped to well-known security design principles such as `assigning the least privilege' and `reducing the size of the attack surface'. Finally, the entire program's security is summarised as a single security index value. These metrics allow different versions of the same program, or different programs intended to perform the same task, to be compared for their relative security at a number of different abstraction levels. The model is validated via an experiment involving five open source Java programs, using a static analysis tool we have developed to automatically extract the security metrics from compiled Java bytecode.
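The hierarchical aggregation can be illustrated with a toy example. The metric `classified_attribute_exposure`, the weights, and the index formula below are hypothetical stand-ins, not the paper's actual metric definitions; they only show the shape of the model: low-level per-class metrics combined into one program-level index.

```python
# Illustrative only: sketches the hierarchical aggregation idea of
# low-level class metrics -> program-level properties -> one index.
def classified_attribute_exposure(n_public_classified, n_classified):
    """Fraction of classified attributes that are publicly readable
    (a hypothetical low-level metric; lower is more secure)."""
    return n_public_classified / n_classified if n_classified else 0.0

def security_index(per_class_metrics, weights):
    """Combine per-class metric values into one index in [0, 1],
    where 1 means no exposure under this toy weighting."""
    exposure = sum(w * m for w, m in zip(weights, per_class_metrics))
    return 1.0 - exposure / sum(weights)

metrics = [classified_attribute_exposure(1, 4),   # one of four fields public
           classified_attribute_exposure(0, 2)]   # fully encapsulated class
idx = security_index(metrics, weights=[1.0, 1.0])
```

A single scalar like `idx` supports exactly the comparison described above: two versions of the same program can be ranked even when their low-level metrics disagree.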

Relevance:

20.00%

Publisher:

Abstract:

A significant proportion of the cost of software development is due to software testing and maintenance. This is in part the result of the inevitable imperfections due to human error, a lack of quality during the design and coding of software, and the increasing need to reduce faults to improve customer satisfaction in a competitive marketplace. Given the cost and importance of removing errors, improvements in fault detection and removal can be of significant benefit. The earlier in the development process faults can be found, the less it costs to correct them and the less likely other faults are to develop. This research aims to make the testing process more efficient and effective by identifying those software modules most likely to contain faults, allowing testing efforts to be carefully targeted. This is done with machine learning algorithms which use examples of fault-prone and not-fault-prone modules to develop predictive models of quality. In order to learn the numerical mapping between a module and its classification, a module is represented in terms of software metrics. A difficulty in this sort of problem is sourcing software engineering data of adequate quality. In this work, data is obtained from two sources, the NASA Metrics Data Program and the open source Eclipse project. Feature selection is applied before learning, and a number of different feature selection methods are compared to find which work best. Two machine learning algorithms are applied to the data - Naive Bayes and the Support Vector Machine - and the predictive results are compared to those of previous efforts, being found superior on selected data sets and comparable on others. In addition, a new classification method is proposed, Rank Sum, in which a ranking abstraction is laid over bin densities for each class, and a classification is determined based on the sum of ranks over features.
A novel extension of this method is also described, based on an observed polarising of points by class when Rank Sum is applied to training data to convert it into a 2D rank-sum space. An SVM is applied to this transformed data to produce models whose parameters can be set according to trade-off curves to obtain a particular performance trade-off.
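One plausible reading of the Rank Sum idea can be sketched as follows. The binning scheme, the ranking direction, and the toy data are assumptions for illustration, not the thesis's exact algorithm: per feature, training values are histogrammed per class, and a test point scores each class by the rank of its bin density, summed over features.

```python
import bisect

def make_bins(values, n_bins=4):
    """Equal-width interior bin edges over the training range."""
    lo, hi = min(values), max(values)
    step = (hi - lo) / n_bins or 1.0
    return [lo + i * step for i in range(1, n_bins)]

def train(X, y, n_bins=4):
    """Per-feature, per-class bin densities (counts)."""
    classes = sorted(set(y))
    model = []
    for f in range(len(X[0])):
        edges = make_bins([row[f] for row in X], n_bins)
        density = {c: [0] * n_bins for c in classes}
        for row, label in zip(X, y):
            density[label][bisect.bisect_right(edges, row[f])] += 1
        model.append((edges, density))
    return classes, model

def predict(classes, model, x):
    """Sum, over features, each class's density rank in x's bin."""
    score = {c: 0 for c in classes}
    for f, (edges, density) in enumerate(model):
        b = bisect.bisect_right(edges, x[f])
        # denser class in this bin gets the higher rank
        for rank, c in enumerate(sorted(classes, key=lambda c: density[c][b])):
            score[c] += rank
    return max(classes, key=lambda c: score[c])

# Toy metric data: one feature (e.g. module complexity), two classes.
X = [[1.0], [2.0], [9.0], [10.0]]
y = ["ok", "ok", "fault", "fault"]
classes, model = train(X, y)
```

The rank abstraction is what makes the method robust: only the ordering of bin densities matters, not their magnitudes, so features on very different scales contribute comparably.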

Relevance:

20.00%

Publisher:

Abstract:

The tear film plays an important role in preserving the health of the ocular surface and maintaining the optimal refractive power of the cornea. Moreover, dry eye syndrome is one of the most commonly reported eye health problems. This syndrome is caused by abnormalities in the properties of the tear film. Current clinical tools to assess tear film properties have shown certain limitations. The traditional invasive methods for the assessment of tear film quality, which are used by most clinicians, have been criticised for their lack of reliability and/or repeatability. A range of non-invasive methods of tear assessment have been investigated, but these also present limitations. Hence, no “gold standard” test is currently available to assess tear film integrity. Therefore, improving techniques for the assessment of tear film quality is of clinical significance and the main motivation for the work described in this thesis. In this study, changes in tear film surface quality (TFSQ) were investigated by means of high-speed videokeratoscopy (HSV). In this technique, a set of concentric rings formed in an illuminated cone or bowl is projected on the anterior cornea and their reflection from the ocular surface is imaged on a charge-coupled device (CCD). The reflection of the light is produced in the outermost layer of the cornea, the tear film. Hence, when the tear film is smooth the reflected image presents a well-structured pattern. In contrast, when the tear film surface presents irregularities, the pattern also becomes irregular due to the scatter and deviation of the reflected light. The videokeratoscope provides an estimate of the corneal topography associated with each Placido disk image. Topographical estimates, which have been used in the past to quantify tear film changes, may not always be suitable for the evaluation of all the dynamic phases of the tear film.
However, the Placido disk image itself, which contains the reflected pattern, may be more appropriate for assessing tear film dynamics. A set of novel routines has been purposely developed to quantify the changes of the reflected pattern and to extract a time-series estimate of the TFSQ from the video recording. The routines extract from each frame of the video recording a maximised area of analysis, in which a metric of the TFSQ is calculated. Initially, two metrics based on Gabor filter and Gaussian gradient-based techniques were used to quantify the consistency of the pattern’s local orientation as a metric of TFSQ. These metrics have helped to demonstrate the applicability of HSV to assess the tear film, and the influence of contact lens wear on TFSQ. The results suggest that the dynamic-area analysis method of HSV was able to distinguish and quantify the subtle but systematic degradation of tear film surface quality in the inter-blink interval during contact lens wear. It was also able to show clearly a difference between bare-eye and contact-lens-wearing conditions. Thus, the HSV method appears to be a useful technique for quantitatively investigating the effects of contact lens wear on the TFSQ. Subsequently, a larger clinical study was conducted to compare HSV with two other non-invasive techniques, lateral shearing interferometry (LSI) and dynamic wavefront sensing (DWS). Of these non-invasive techniques, HSV appeared to be the most precise method for measuring TFSQ, by virtue of its lower coefficient of variation, while LSI appeared to be the most sensitive method for analysing the tear build-up time (TBUT). The capability of each of the non-invasive methods to discriminate dry eye from normal subjects was also investigated. Receiver operating characteristic (ROC) curves were calculated to assess the ability of each method to predict dry eye syndrome.
The LSI technique gave the best results under both natural and suppressed blinking conditions, closely followed by HSV; the DWS did not perform as well as LSI or HSV. The main limitation of the HSV technique identified during this clinical study was a lack of sensitivity for quantifying the build-up/formation phase of the tear film cycle. For that reason, an extra metric based on image transformation and block processing was proposed. In this metric, the area of analysis was transformed from Cartesian to polar coordinates, converting the concentric-circle pattern into a quasi-straight-line image from which a block statistics value was extracted. This metric has shown better sensitivity under low pattern disturbance and has improved the performance of the ROC curves. Additionally, a theoretical study based on ray-tracing techniques and topographical models of the tear film was undertaken to fully understand the HSV measurement and the instrument’s potential limitations. Of special interest was the assessment of the instrument’s sensitivity to subtle topographic changes. The theoretical simulations have helped to provide some understanding of tear film dynamics; for instance, the model extracted for the build-up phase has provided some insight into the dynamics of this initial phase. Finally, some aspects of the mathematical modelling of TFSQ time series are reported in this thesis. Over the years, different functions have been used to model the time series and to extract the key clinical parameters (i.e., timing). Unfortunately, those techniques for modelling the tear film time series do not simultaneously consider the underlying physiological mechanism and the parameter extraction methods. A set of guidelines is proposed to meet both criteria.
Special attention was given to a commonly used fit, the polynomial function, with considerations for selecting the appropriate model order to ensure that the true derivative of the signal is accurately represented. The work described in this thesis has shown the potential of using high-speed videokeratoscopy to assess tear film surface quality. A set of novel image and signal processing techniques has been proposed to quantify different aspects of tear film assessment, analysis and modelling. The dynamic-area HSV method has shown good performance in a broad range of conditions (i.e., contact lens wearers, normal and dry eye subjects). As a result, this technique could become a useful clinical tool for assessing tear film surface quality.
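The Cartesian-to-polar unwrapping behind the block-processing metric can be sketched as follows. The grid sizes, the synthetic ring pattern and the use of variance as the block statistic are illustrative assumptions, not the thesis's implementation: concentric rings centred on (cx, cy) become near-horizontal lines after resampling, so a smooth film yields low variance within each polar row.

```python
import math

def to_polar(img, cx, cy, n_r=16, n_theta=32):
    """Nearest-neighbour resample img (list of rows) onto an (r, theta) grid."""
    h, w = len(img), len(img[0])
    r_max = min(cx, cy, w - 1 - cx, h - 1 - cy)
    polar = []
    for i in range(n_r):
        r = r_max * (i + 0.5) / n_r
        row = []
        for j in range(n_theta):
            t = 2 * math.pi * j / n_theta
            x = int(round(cx + r * math.cos(t)))
            y = int(round(cy + r * math.sin(t)))
            row.append(img[y][x])
        polar.append(row)
    return polar

def block_variance(block):
    """Variance of all pixel values in a block (list of rows)."""
    flat = [v for row in block for v in row]
    mean = sum(flat) / len(flat)
    return sum((v - mean) ** 2 for v in flat) / len(flat)

# Synthetic ring pattern: intensity depends only on radius, so each
# polar row is nearly constant, unlike a Cartesian row through centre.
size, cx, cy = 33, 16, 16
img = [[math.cos(math.hypot(x - cx, y - cy)) for x in range(size)]
       for y in range(size)]
polar = to_polar(img, cx, cy)
row_var = [block_variance([row]) for row in polar]
```

On a disturbed pattern the rings break up, the polar rows stop being constant, and the per-block statistic rises, which is the disturbance signal the metric exploits.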

Relevance:

20.00%

Publisher:

Abstract:

Information and communication technologies (ICTs) are essential components of the knowledge economy, and have an immense complementary role in innovation, education, knowledge creation, and relations with government, civil society, and business within city regions. The ability to create, distribute, and exploit knowledge has become a major source of competitive advantage, wealth creation, and improvements in the new regional policies. The growing impact of ICTs on the economy and society, the rapid application of recent scientific advances in new products and processes, the shift to more knowledge-intensive industry and services, and rising skill requirements have become crucial concerns for urban and regional competitiveness. Therefore, harnessing ICTs for knowledge-based urban development (KBUD) has a significant impact on urban and regional growth (Yigitcanlar, 2005). In this sense, the e-region is a novel concept utilizing ICTs for regional development. Since the Helsinki European Council announced Turkey as a candidate for European Union (EU) membership in 1999, the candidacy has accelerated the pace of regional policy enhancements and the adoption of European regional policy standards. These enhancements include the generation of a new regional spatial division, the NUTS-II statistical regions; new legislation on the establishment of regional development agencies (RDAs); and new orientations in the fields of higher education, science, and technology within the framework of the EU’s Lisbon Strategy and the Bologna Process. The European standards posed an ambitious new agenda in the development and application of contemporary regional policy in Turkey (Bilen, 2005). In this sense, novel regional policies in Turkey necessarily endeavor to include information society objectives through the efficient use of new technologies such as ICTs.
Such development seeks to build on the tangible assets of the region (Friedmann, 2006) as well as on best practices derived from grounding initiatives at urban and local levels. These assets provide the foundation of an e-region that harnesses regional development in an information society context. With successful implementations, the Marmara region’s local governments in Turkey are setting the benchmark for the country in the implementation of spatial information systems and e-governance, and are moving toward an e-region. Therefore, this article aims to shed light on the organizational and regional realities of recent ICT applications and their supply instruments, based on evidence from selected local government organizations in the Marmara region. The article also exemplifies the challenges and opportunities of the region in moving toward an e-region and provides a concise review of different ICT applications and strategies in a broader urban and regional context. The article is organized in three parts. The following section scrutinizes the e-region framework and the role of ICTs in regional development. Then, Marmara’s opportunities and challenges in moving toward an e-region are discussed in the context of ICT applications and their supply instruments, based on public-sector projects, policies, and initiatives. The last section discusses conclusions and prospective research.

Relevance:

20.00%

Publisher:

Abstract:

Many governments worldwide are attempting to increase accountability, transparency, and the quality of services by adopting information and communications technologies (ICTs) to modernize and change the way their administrations work. Meanwhile, e-government is becoming a significant decision-making and service tool at local, regional and national government levels. The vast majority of users of these government online services see significant benefits from being able to access services online. The rapid pace of technological development has created increasingly powerful ICTs that are capable of radically transforming public institutions and private organizations alike. These technologies have proven to be extraordinarily useful instruments in enabling governments to enhance the quality, speed of delivery and reliability of services to citizens and to business (VanderMeer & VanWinden, 2003). However, just because the technology is available does not mean it is accessible to all. The term “digital divide” has been used since the 1990s to describe patterns of unequal access to ICTs - primarily computers and the Internet - based on income, ethnicity, geography, age, and other factors. Over time, the term has evolved to define more broadly the disparities in technology usage resulting from a lack of access, skills, or interest in using technology. This article provides an overview of recent literature on e-government and the digital divide, and includes a discussion on the potential of e-government in addressing the digital divide.

Relevance:

20.00%

Publisher:

Abstract:

Multilevel converters are used in high-power and high-voltage applications due to their attractive ability to generate high-quality output voltage. Increasing the number of voltage levels leads to a reduction in lower-order harmonics. Various modulation and control techniques have been introduced for multilevel converters, such as Space Vector Modulation (SVM), Sinusoidal Pulse Width Modulation (SPWM) and Harmonic Elimination (HE) methods. Multilevel converters may have a DC link with equal or unequal DC voltages. In this paper, a new modulation technique based on the harmonic elimination method is proposed for multilevel converters with unequal DC link voltages. This new technique yields better output voltage quality and lower Total Harmonic Distortion (THD) than other modulation techniques. To verify the proposed modulation technique, MATLAB simulations are carried out for a single-phase diode-clamped inverter.
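The arithmetic behind harmonic elimination with unequal DC links can be illustrated for a quarter-wave-symmetric staircase waveform: the n-th odd harmonic amplitude is h_n = (4/(n*pi)) * sum_i V_i * cos(n * theta_i), and THD is the RMS of the higher harmonics relative to the fundamental. The switching angles and voltages below are illustrative values, not an optimised solution from the paper.

```python
import math

def harmonic(n, levels):
    """n-th odd harmonic of a quarter-wave-symmetric staircase waveform.
    levels: list of (V_i, theta_i) pairs, theta in radians."""
    return 4.0 / (n * math.pi) * sum(v * math.cos(n * t) for v, t in levels)

def thd(levels, n_max=49):
    """Total Harmonic Distortion up to harmonic n_max (odd harmonics only)."""
    h1 = harmonic(1, levels)
    rest = sum(harmonic(n, levels) ** 2 for n in range(3, n_max + 1, 2))
    return math.sqrt(rest) / abs(h1)

# Two switching angles with unequal DC link voltages (illustrative).
levels = [(1.0, math.radians(15)), (0.8, math.radians(45))]
distortion = thd(levels)
```

Selective harmonic elimination then amounts to solving for angles theta_i that drive chosen h_n (e.g. the 3rd and 5th) to zero while keeping h_1 at the commanded value.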

Relevance:

20.00%

Publisher:

Abstract:

CRE (Corporate Real Estate) decisions should not simply deal with the management of individual facilities, but should especially be concerned with the relationships that a facility has with the corporate business strategy and with the larger real estate markets. Both the practice and the research of CRE management have historically tended to emphasize real estate issues and ignore the corporation’s business issues, causing real estate strategies to be disconnected from the goals and priorities of the corporation’s senior management. With regard to office cycles, a large number of econometric models have been proposed during the last 20 years. However, evidence from historical data and previous research in the field of real estate forecasting seems to agree on only one thing: the existence of interconnected property cycles centred on vacancy rates (demand). Vacancy also represents the linkage between the inadequacy of existing CRE strategies and the inability of existing econometric models to correctly forecast office rent cycles. Business cycles, across different industry sectors, have decreased from 5-7 years to 1-3 years today, yet corporations are still entering into leases of 5-10 years, causing hidden vacancy levels to rise. Possibly, once CRE strategies are fully in tune with the overall business, hidden vacancy will fade away, providing forecasters with better-quality data. The aim of this paper is not to investigate whether and when the supply side will eventually evolve to provide flexible occupancy arrangements that accommodate corporate agility requirements, but rather to propose a general framework for corporations to improve the decision-making processes of their CRE executives, while emphasizing the importance of understanding the context as a precondition to effective real estate involvement.

Relevance:

20.00%

Publisher:

Abstract:

Introduction. Surgical treatment of scoliosis is assessed in the spine clinic by the surgeon making numerous measurements on X-rays as well as of the rib hump. It is important to understand which of these measures correlate with self-reported improvements in patients’ quality of life following surgery. The objective of this study was to examine the relationship between patient satisfaction after thoracoscopic (keyhole) anterior scoliosis surgery and standard deformity correction measures, using the Scoliosis Research Society (SRS) adolescent questionnaire. Methods. A series of 100 consecutive adolescent idiopathic scoliosis patients received a single anterior rod via a keyhole approach at the Mater Children’s Hospital, Brisbane. Patients completed SRS outcomes questionnaires before surgery and again 24 months after surgery. Multiple regression and t-tests were used to investigate the relationship between SRS scores and the deformity correction achieved after surgery. Results. There were 94 females and 6 males with a mean age of 16.1 years. The mean Cobb angle improved from 52° pre-operatively to 21° for the instrumented levels post-operatively (59% correction), and the mean rib hump improved from 16° to 8° (51% correction). The mean total SRS score for the cohort was 99.4/120, which indicated a high level of satisfaction with the results of their scoliosis surgery. None of the deformity-related parameters in the multiple regressions was significant. However, the twenty patients with the smallest Cobb angles after surgery reported significantly higher SRS scores than the twenty patients with the largest Cobb angles after surgery, although there was no difference on the basis of rib hump correction. Discussion. Patients undergoing thoracoscopic (keyhole) anterior scoliosis correction report good SRS scores which are comparable to those in previous studies.
We suggest that the absence of any statistically significant difference in SRS scores between patients with and without rod or screw complications is because these complications are not associated with any clinically significant loss of correction in our patient group. The Cobb angle after surgery was the only significant predictor of patient satisfaction when comparing subgroups of patients with the largest and smallest Cobb angles after surgery.
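The subgroup comparison above can be illustrated with a Welch two-sample t statistic. The SRS scores below are invented for illustration only; the study's actual data are not reproduced here.

```python
import math
import statistics

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    va, vb = statistics.variance(a), statistics.variance(b)
    return (statistics.mean(a) - statistics.mean(b)) / math.sqrt(
        va / len(a) + vb / len(b))

small_cobb = [104, 102, 99, 101, 98, 103]   # hypothetical SRS totals
large_cobb = [95, 97, 93, 96, 94, 92]
t = welch_t(small_cobb, large_cobb)         # positive: smaller Cobb, higher SRS
```

A positive t of this magnitude would correspond to the reported pattern: patients with the smallest residual Cobb angles score higher on the SRS questionnaire.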

Relevance:

20.00%

Publisher:

Abstract:

Inspection of solder joints has been a critical process in the electronic manufacturing industry to reduce manufacturing cost, improve yield, and ensure product quality and reliability. This paper proposes two inspection modules for an automatic solder joint classification system. The “front-end” inspection system includes illumination normalisation, localisation and segmentation. The “back-end” inspection involves the classification of solder joints using the Log Gabor filter and classifier fusion. Five different levels of solder quality, with respect to the amount of solder paste, have been defined. The Log Gabor filter has been demonstrated to achieve high recognition rates and is resistant to misalignment. The proposed system does not need any special illumination system, and the images are acquired by an ordinary digital camera. This system could contribute to the development of automated, non-contact, non-destructive and low-cost solder joint quality inspection systems.
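The radial log-Gabor transfer function commonly used in such filter banks can be sketched as follows; the centre frequency and bandwidth ratio are assumed values, and the paper's actual filter bank is not reproduced here. Unlike an ordinary Gabor filter, the log-Gabor response has zero gain at DC, which makes it insensitive to uniform illumination.

```python
import math

def log_gabor(f, f0=0.1, sigma_ratio=0.55):
    """Radial log-Gabor gain at spatial frequency f (cycles/pixel):
    G(f) = exp(-ln(f/f0)^2 / (2*ln(sigma_ratio)^2)), with G(0) = 0."""
    if f <= 0.0:
        return 0.0                      # no DC component by construction
    return math.exp(-(math.log(f / f0) ** 2)
                    / (2.0 * math.log(sigma_ratio) ** 2))

# Sample the response over f = 0.00 .. 0.50 cycles/pixel.
gains = [log_gabor(f / 100.0) for f in range(0, 51)]
peak = max(range(len(gains)), key=gains.__getitem__)
```

The response peaks at the centre frequency f0 and falls off symmetrically on a log-frequency axis, which is why banks of these filters at several scales and orientations are popular for texture-based classification.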

Relevance:

20.00%

Publisher:

Abstract:

Background: Phylogeographic reconstruction of some bacterial populations is hindered by low diversity coupled with high levels of lateral gene transfer. A comparison of recombination levels and diversity at seven housekeeping genes for eleven bacterial species, most of which are commonly cited as having high levels of lateral gene transfer, shows that the relative contribution of homologous recombination versus mutation for Burkholderia pseudomallei is over two times higher than for Streptococcus pneumoniae, and is thus the highest value yet reported in bacteria. Despite the potential for homologous recombination to increase diversity, B. pseudomallei exhibits a relative lack of diversity at these loci. In these situations, whole-genome genotyping of orthologous shared single nucleotide polymorphism loci, discovered using next-generation sequencing technologies, can provide very large data sets capable of estimating core phylogenetic relationships. We compared and searched 43 whole-genome sequences of B. pseudomallei and its closest relatives for single nucleotide polymorphisms in orthologous shared regions to use in phylogenetic reconstruction. Results: Bayesian phylogenetic analyses of >14,000 single nucleotide polymorphisms yielded completely resolved trees for these 43 strains with high levels of statistical support. These results enable a better understanding of a separate analysis of population differentiation among >1,700 B. pseudomallei isolates as defined by sequence data from seven housekeeping genes. We analyzed this larger data set for population structure and for allele sharing that can be attributed to lateral gene transfer. Our results suggest that, despite an almost panmictic population, we can detect two distinct populations of B. pseudomallei that conform to biogeographic patterns found in many plant and animal species: that is, separation along Wallace's Line, a biogeographic boundary between Southeast Asia and Australia.
Conclusion: We describe an Australian origin for B. pseudomallei, characterized by a single introduction event into Southeast Asia during a recent glacial period, and variable levels of lateral gene transfer within populations. These patterns provide insights into mechanisms of genetic diversification in B. pseudomallei and its closest relatives, and provide a framework for integrating the traditionally separate fields of population genetics and phylogenetics for other bacterial species with high levels of lateral gene transfer.
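The core shared-SNP extraction step can be sketched on toy aligned sequences. The strain names and sequences below are invented, and a real pipeline must first establish orthologous alignment across genomes; this only shows how polymorphic, unambiguous columns are collected into per-strain SNP profiles for phylogenetic input.

```python
# Toy alignment: one sequence per strain, already aligned column-wise.
aln = {
    "strain_A": "ACGTACGT",
    "strain_B": "ACGTACCT",
    "strain_C": "ATGTACCT",
}

def snp_columns(alignment):
    """Indices of columns with more than one base and no gaps/ambiguities."""
    seqs = list(alignment.values())
    cols = []
    for i in range(len(seqs[0])):
        bases = {s[i] for s in seqs}
        if len(bases) > 1 and bases <= set("ACGT"):
            cols.append(i)
    return cols

cols = snp_columns(aln)
profiles = {name: "".join(seq[i] for i in cols) for name, seq in aln.items()}
```

Concatenated profiles like these, over thousands of columns, are what feed the Bayesian tree inference: invariant columns carry no phylogenetic signal and are dropped, which is why >14,000 SNPs can summarise 43 whole genomes.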