972 results for Quality tools
Abstract:
The performance of a pavement depends on the quality of its subgrade and subbase layers; these foundational layers play a key role in mitigating the effects of climate and the stresses generated by traffic. Therefore, building a stable subgrade and a properly drained subbase is vital for constructing an effective and long-lasting pavement system. This manual has been developed to help Iowa highway engineers improve the design, construction, and testing of a pavement system's subgrade and subbase layers, thereby extending pavement life. The manual synthesizes current and previous research conducted in Iowa and other states into a practical geotechnical design guide [proposed as Chapter 6 of the Statewide Urban Design and Specifications (SUDAS) Design Manual] and construction specifications (proposed as Section 2010 of the SUDAS Standard Specifications) for subgrades and subbases. Topics covered include the important characteristics of Iowa soils, the key parameters and field properties of optimum foundations, embankment construction, geotechnical treatments, drainage systems, and field testing tools, among others.
Abstract:
OBJECTIVE: To assess the association between socio-demographic factors and the quality of preventive care and chronic care of cardiovascular (CV) risk factors in a country with universal health care coverage. METHODS: Our retrospective cohort assessed a random sample of 966 patients aged 50-80 years followed over 2 years (2005-2006) in 4 Swiss university primary care settings (Basel/Geneva/Lausanne/Zürich). We used RAND's Quality Assessment Tools indicators and examined recommended preventive care among different socio-demographic subgroups. RESULTS: Overall, patients received 69.6% of recommended preventive care. Preventive care indicators were more likely to be met among men (72.8% vs. 65.4%; p<0.001), younger patients (from 71.0% at 50-59 years to 66.7% at 70-80 years, p for trend=0.03) and Swiss patients (71.1% vs. 62.7% in forced migrants; p=0.001). This latter difference remained in multivariate analysis adjusted for gender, age, civil status and occupation (OR 0.68; 95% CI 0.54-0.86). Forced migrants had lower scores for physical examination and breast and colon cancer screening (all p≤0.02). No major differences were seen for chronic care of CV risk factors. CONCLUSION: Despite universal healthcare coverage, forced migrants receive less preventive care than Swiss patients in university primary care settings. Greater attention should be paid to forced migrants for preventive care.
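As an illustration of how such indicator-based scores can be computed, here is a minimal sketch in Python; the per-patient pass rate over eligible indicators and the example indicator names are assumptions, not the scoring procedure used in the study.

```python
# Hypothetical illustration of indicator-based quality scoring (in the spirit
# of the RAND Quality Assessment Tools): a patient's score is the share of
# indicators the patient was eligible for that were actually met.
from dataclasses import dataclass

@dataclass
class IndicatorResult:
    indicator: str   # e.g. "blood pressure measured within the last 2 years"
    eligible: bool   # does this indicator apply to the patient?
    met: bool        # was the recommended care documented?

def quality_score(results: list[IndicatorResult]) -> float | None:
    """Percentage of eligible indicators met; None if none apply."""
    eligible = [r for r in results if r.eligible]
    if not eligible:
        return None
    return 100.0 * sum(r.met for r in eligible) / len(eligible)

if __name__ == "__main__":
    patient = [
        IndicatorResult("blood pressure measured", eligible=True, met=True),
        IndicatorResult("colon cancer screening", eligible=True, met=False),
        IndicatorResult("breast cancer screening", eligible=False, met=False),
    ]
    print(f"{quality_score(patient):.1f}% of recommended care received")
```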
Abstract:
Modern multimedia communication tools must have high security, high availability and a high quality of service (QoS). Any security implementation will have a direct impact on QoS. This paper investigates how end-to-end security affects QoS in Voice over Internet Protocol (VoIP). QoS is measured in terms of lost packet ratio, latency and jitter using different encryption algorithms, no security, and the use of IP firewalls alone in Local and Wide Area Networks (LAN and WAN). The results of laboratory tests indicate that the impact on the overall performance of VoIP depends upon the bandwidth availability and the encryption algorithm used. Implementing any encryption algorithm in low-bandwidth environments degrades voice quality due to increased packet loss and packet latency, but as bandwidth increases, encrypted VoIP calls provide better service than an unsecured environment.
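To make the three QoS metrics concrete, here is a minimal sketch that computes packet loss ratio, mean latency, and jitter from per-packet send/receive timestamps; the record layout is an assumption, and the jitter estimator follows the RFC 3550 smoothed-deviation style rather than anything specified in the paper.

```python
# Hypothetical computation of VoIP QoS metrics from per-packet records.
# Each record: (sequence_number, sent_time_s, received_time_s or None if lost).

def qos_metrics(packets):
    received = [(seq, tx, rx) for seq, tx, rx in packets if rx is not None]
    sent = len(packets)
    loss_ratio = 1.0 - len(received) / sent if sent else 0.0

    # One-way latency: mean transit time of delivered packets.
    transits = [rx - tx for _, tx, rx in received]
    latency = sum(transits) / len(transits) if transits else 0.0

    # Jitter: RFC 3550-style smoothed mean deviation of transit-time differences.
    jitter = 0.0
    for prev, curr in zip(transits, transits[1:]):
        jitter += (abs(curr - prev) - jitter) / 16.0

    return {"loss_ratio": loss_ratio, "latency_s": latency, "jitter_s": jitter}

if __name__ == "__main__":
    trace = [(1, 0.00, 0.08), (2, 0.02, 0.11), (3, 0.04, None), (4, 0.06, 0.15)]
    print(qos_metrics(trace))
```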
Abstract:
The aim of the study was to determine how to develop the company's current e-service system, an Internet-technology-based electronic communication and information-sharing system, for managing the company's business-to-business customer relationships. A further aim was to create proposals for new e-service agreement models. In the theoretical part of the study, the goal was to develop a framework model based on previous research, literature, and expert input. In the empirical part, the objectives were pursued by interviewing the company's customers and personnel and by examining the current state and development of customer contacts. On the basis of this information, the needs, profiles, and readiness of e-service users, as well as the current attractiveness of the service, were assessed. The source material for the theoretical part consisted of literature, articles, and statistics on customer relationship management and on the marketing, current state, and development of e-services, especially Internet and online services. In addition, literature on value network analysis, customer value, information technology, service quality, and customer satisfaction was reviewed. The empirical part is based on information gathered in interviews with the company's personnel and customers, on material previously collected by the company, and on data collected by Taloustutkimus. The study used a case method combining qualitative and quantitative research. The purpose of the case was to test the validity and usefulness of the model and to determine whether there are further factors that affect the value received by the customer. The qualitative material is based on interviews with customers and company employees conducted using the semi-structured (theme) interview method. The quantitative research is based on a Taloustutkimus survey and on data collected on the company's customer contacts. Based on the interviews, e-services were seen as useful and as very important in the future. E-services are regarded as one important channel, alongside traditional channels, for making the management of business-to-business customer relationships more effective. According to the results, the variation in customers' levels of knowledge, skill, perceived need, and interest related to the service demonstrates a clear need for e-service package solutions at different levels. The solution proposal derived from the results comprises the construction of four different e-service packages tailored to customers' different needs.
Abstract:
The number of qualitative research methods has grown substantially over the last twenty years, both in the social sciences and, more recently, in the health sciences. This growth came with questions about the quality criteria needed to evaluate such work, and numerous guidelines were published. These guidelines, however, contain many discrepancies, both in their vocabulary and in their construction. Many expert evaluators decry the absence of consensual and reliable evaluation tools. The authors present the results of an evaluation of 58 existing guidelines in 4 major health science fields (medicine and epidemiology; nursing and health education; social sciences and public health; psychology/psychiatry, research methods and organization) by expert users (article reviewers, experts allocating funds, editors, etc.). The results propose a toolbox containing 12 consensual criteria with the definitions given by expert users. They also indicate in which disciplinary field each type of criterion is considered more or less essential. Nevertheless, the authors highlight the limits of criteria comparability as soon as one focuses on their specific definitions. They conclude that each criterion in the toolbox must be explicated in order to reach a broader consensus and to identify definitions that are consensual across all the fields examined and easily operationalized.
Abstract:
With qualitative methods being increasingly used in health science fields, numerous grids proposing criteria to evaluate the quality of this type of research have been produced. Expert evaluators deem that there is a lack of consensual tools to evaluate qualitative research. Based on a review of 133 quality criteria grids for qualitative research in the health sciences, the authors present the results of a computerized lexicometric analysis, which confirms the variety of intra- and inter-grid constructions, including within the same field. This variety is linked to the authors' paradigmatic references underlying the proposed criteria. These references seem to be built intuitively, reflecting internal representations of qualitative research, thus making the grids and their criteria hard to compare. Consequently, reaching consensus on the definitions and the number of criteria becomes problematic. The paradigmatic and theoretical references of the grids should be specified so that users can better assess their contributions and limitations.
Abstract:
PURPOSE: Despite growing interest in measurement of health care quality and patient experience, the current evidence base largely derives from adult health settings, at least in part because of the absence of appropriately developed measurement tools for adolescents. To rectify this, we set out to develop a conceptual framework and a set of indicators to measure the quality of health care delivered to adolescents in hospital. METHODS: A conceptual framework was developed from the following four elements: (1) a review of the evidence around what young people perceive as "adolescent-friendly" health care; (2) an exploration with adolescent patients of the principles of patient-centered care; (3) a scoping review to identify core clinical practices around working with adolescents; and (4) a scoping review of existing conceptual frameworks. Using criteria for indicator development, we then developed a set of indicators that mapped to this framework. RESULTS: Embedded within the notion of patient- and family-centered care, the conceptual framework for adolescent-friendly health care (quality health care for adolescents) was based on the constructs of experience of care (positive engagement with health care) and evidence-informed care. A set of 14 indicators was developed, half of which related to adolescents' and parents' experience of care and half of which related to aspects of evidence-informed care. CONCLUSIONS: The conceptual framework and indicators of quality health care for adolescents set the stage to develop measures to populate these indicators, the next step in the agenda of improving the quality of health care delivered to adolescents in hospital settings.
Abstract:
This article is the result of ongoing research into a variety of features of Spanish local government. It aims, in particular, to provide a profile of the tools implemented by local authorities to improve local democracy in Catalonia. The main hypothesis of the work is that, even though the Spanish local model is constrained by a shared and unique set of legal regulations, local institutions in Catalonia have developed their own model of local participation, and the range of such instruments is still increasing. More specifically, the scope of this research is twofold. On the one hand, different types of instruments for public deliberation in the Catalan local administration system are identified and presented, based on where they fall in the policy cycle. On the other hand, we focus on policy domains and the quality of the decision-making processes. Examining the stability of the participation tools, or whether local democracy prefers more 'ad hoc' processes, allows us to analyze the boundaries of local democracy in Catalonia. The main idea underlying this paper is that, despite the existence of a single legal model regulating municipalities in Catalonia, local authorities tend to use their legally granted self-management capacities to design their own instruments, which end up presenting perceivably distinct features, stressing democracy in different policy domains and at different stages of the policy cycle. Therefore, this paper is intended to identify such models and to provide factors (variables) from which an explanatory model can be built.
Abstract:
This thesis concentrates on studying the operational disturbance behavior of machine tools integrated into FMS. Operational disturbances are short-term failures of machine tools which are especially disruptive to unattended or unmanned operation of FMS. The main objective was to examine the effect of operational disturbances on reliability and on the operation time distribution of machine tools. The theoretical part of the thesis covers the fundamentals of FMS relevant to the subject of this study. The concept of FMS, its benefits and the operator's role in FMS operation are reviewed. The importance of reliability is presented. The terms describing the operation time of machine tools are formed by adopting standards and references. The concept of failure and the indicators describing reliability and operational performance for machine tools in FMSs are presented. The empirical part of the thesis describes the research methodology, which is a combination of automated data collection (ADC) and manual data collection. With this methodology it is possible to obtain a complete view of the operation time distribution of the studied machine tools. Data collection was carried out in four FMSs consisting of a total of 17 machine tools. Each FMS's basic features and the signals of ADC are described. The indicators describing the reliability and operation time distribution of machine tools were calculated from the collected data. The results showed that operational disturbances have a significant influence on machine tool reliability and operational performance. On average, an operational disturbance occurs every 8.6 hours of operation time and causes a down time of 0.53 hours. Operational disturbances cause a 9.4% loss in operation time, which is twice the loss caused by technical failures (4.3%). Operational disturbances thus reduce the utilization rate: the poorer the operational disturbance behavior, the lower the utilization rate. It was found that the features of the part family to be machined and the related method technology define the operational disturbance behavior of the machine tool. The main causes of operational disturbances were related to material quality variations, tool maintenance, NC program errors, the ATC, and the machine tool control. The operator's role was emphasized. It was found that the failure recording activity of the operators correlates with the utilization rate: the more precisely the operators record failures, the higher the utilization rate. FMS organizations which record failures more precisely also have fewer operational disturbances.
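For readers unfamiliar with indicators of this kind, a minimal sketch of MTBF/MTTR-style calculations is given below; the formulas and the example figures are assumptions and do not reproduce the thesis's own indicator definitions.

```python
# Hypothetical MTBF/MTTR-style indicators computed from collected
# disturbance data; the thesis uses its own indicator definitions.
def disturbance_indicators(operation_time_h: float,
                           downtimes_h: list[float]) -> dict[str, float]:
    n = len(downtimes_h)
    total_down = sum(downtimes_h)
    return {
        # Mean operation time between operational disturbances.
        "mean_time_between_disturbances_h": operation_time_h / n if n else float("inf"),
        # Mean down time per disturbance.
        "mean_down_time_h": total_down / n if n else 0.0,
        # Share of total time (operation + down time) lost to disturbances.
        "time_lost_pct": 100.0 * total_down / (operation_time_h + total_down),
    }

if __name__ == "__main__":
    # Illustrative example: 500 h of operation with four disturbances.
    print(disturbance_indicators(500.0, [0.4, 1.2, 0.3, 0.7]))
```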
Abstract:
Workflow management systems aim at the controlled execution of complex application processes in distributed and heterogeneous environments. These systems will shape the structure of information systems in business and non-business environments. E-business and system integration are fertile ground for WF and groupware tools. This thesis aims to study WF and groupware tools in order to gather in-house knowledge of WF and better utilize WF solutions in the future, and to focus on SAP Business Workflow in order to find a global solution for Application Link Enabling (ALE) support in system integration. Piloting this solution at Nokia gathers experience with the SAP R/3 WF tool for future development projects. The literature part of this study guides the reader into the world of business process automation, providing a general description of the history, use and potential of WF and groupware software. The empirical part begins with the background of the case study, describing the IT environment that initiated the case: the introduction of SAP R/3 at Nokia, the communication technique in use, and the WF tool. The case study focuses on one solution built with SAP Business Workflow. This study provides a concept for monitoring communication between ERP systems and for increasing the quality of system integration. The case study describes a way to create a support model for ALE/EDI interfaces; the support model includes the monitoring organization and the workflow processes for resolving the most common IDoc-related errors.
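As a rough illustration of the kind of monitoring such a support model implies, the sketch below groups IDocs by status code so that the most frequent error states surface first; the input format is assumed, and the status-code meanings in the comments are common SAP conventions rather than details taken from the thesis.

```python
# Hypothetical IDoc status summary built from an exported list of
# (idoc_number, status_code) pairs, e.g. a periodic extract.
from collections import Counter

# Commonly cited SAP IDoc status codes, assumed here for illustration:
#   53 = application document posted, 51 = application error,
#   56 = inbound IDoc with errors, 03 = data passed to port OK.
ERROR_STATUSES = {"51", "56"}

def status_report(idocs: list[tuple[str, str]]) -> None:
    counts = Counter(status for _, status in idocs)
    for status, count in counts.most_common():
        flag = "ERROR" if status in ERROR_STATUSES else "ok"
        print(f"status {status}: {count:4d} IDocs [{flag}]")

if __name__ == "__main__":
    status_report([("0000000100", "53"), ("0000000101", "51"),
                   ("0000000102", "51"), ("0000000103", "03")])
```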
Abstract:
Software testing is one of the essential parts of the software engineering process. The objective of the study was to describe software testing tools and their use. The thesis contains examples of software testing tool usage. The study was conducted as a literature study, with a focus on current software testing practices and quality assurance standards. A tool classifier was employed, and the testing tools presented in the study were classified according to it. We found that it is difficult to categorize currently available tools by specific testing activities, as many of them contain functionality that exceeds the scope of a single testing type.
Abstract:
IT outsourcing refers to the way companies focus on their core competencies and buy the supporting functions from other companies specialized in that area. Service is the total outcome of numerous activities by employees and other resources aimed at providing solutions to customers' problems. Outsourcing and service business have their own unique characteristics. Service Level Agreements quantify the minimum acceptable service to the user. Service quality has to be objectively quantified so that its achievement or non-achievement can be monitored. Offshoring usually refers to the transfer of tasks to low-cost countries; it presents many challenges that require special attention and need to be assessed thoroughly. IT infrastructure management refers to the installation of, and basic usability assistance for, operating systems, network and server tools and utilities. ITIL defines the industry best practices for organizing IT processes. This thesis analyzed a server operations service and the customers' perception of the quality of daily operations. The agreed workflows and processes should be followed more consistently: the service provider's processes are thoroughly defined, but both the customer and the service provider may deviate from them. The service provider should review the workflows regarding customer functions. Customer-facing functions require persistent skill development, as they communicate quality to the customer. The service provider also needs better organized communication and knowledge exchange methods between specialists in different geographical locations.
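To show how a service level can be objectively quantified and monitored, as the abstract calls for, here is a minimal sketch of an SLA compliance check; the resolution-time target and the 95% threshold are illustrative assumptions, not figures from the thesis.

```python
# Hypothetical SLA compliance check: did enough tickets meet the agreed
# resolution-time target?
def sla_compliance(resolution_hours: list[float],
                   target_hours: float,
                   required_share: float = 0.95) -> bool:
    """True if at least `required_share` of tickets met the target."""
    met = sum(1 for h in resolution_hours if h <= target_hours)
    return met / len(resolution_hours) >= required_share

if __name__ == "__main__":
    tickets = [2.0, 3.5, 1.0, 7.5, 4.0, 2.2, 9.0, 3.1, 2.9, 4.4]
    print("SLA met:", sla_compliance(tickets, target_hours=8.0))
```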
Abstract:
Today's software industry faces increasingly complex challenges in a world where software is nearly ubiquitous in our daily lives. Consumers want products that are reliable, innovative and rich in functionality, yet also affordable. The challenge for those of us in the IT industry is to create more complex, innovative solutions at a lower cost. This is one of the reasons why process improvement as a research area has not declined in importance. IT professionals ask themselves: "How do we keep our promises to our customers while minimizing our risk and increasing our quality and productivity?" Within the field of process improvement there are different approaches. Traditional software process improvement methods such as CMMI and SPICE focus on the quality and risk aspects of the improvement process. More lightweight approaches, such as agile methods and Lean methods, focus on keeping promises and improving productivity by minimizing waste in the development process. The research presented in this thesis was carried out with a specific goal in mind: to improve the cost-effectiveness of working methods without compromising quality. That challenge was attacked from three different angles. First, working methods are improved by introducing agile methods. Second, quality is maintained by using product-level measurement methods. Third, knowledge sharing within large companies is improved through methods that put collaboration at the center. The agile movement emerged during the 1990s as a reaction to the unrealistic demands that the previously dominant waterfall method placed on the IT industry. Software development is a creative process and differs from other industries in that most of the daily work consists of creating something new that did not exist before. Every software developer must be an expert in her field and spends a large part of her working day creating solutions to problems she has never solved before. Although this has been a well-known fact for decades, many software projects are still managed as if they were production lines in factories. One of the goals of the agile movement is to highlight precisely this discrepancy between the innermost nature of software development and the way software projects are managed. Agile working methods have proven to work well in the contexts for which they were created, i.e. small, co-located teams working in close collaboration with a committed customer. In other contexts, and especially in large, geographically distributed companies, introducing agile methods is more challenging. We have approached this challenge by introducing agile methods through pilot projects. This has two clear advantages. First, knowledge about the methods and their interaction with the context in question can be gathered incrementally, so the methods can more easily be developed and adapted to the specific requirements of that context. Second, resistance to change can more easily be overcome by introducing cultural changes carefully and by giving the target group direct first-hand contact with the new methods. Relevant product measurement methods can help software development teams improve their working methods. For teams working with agile and Lean methods, a good set of metrics can be decisive for decision making when prioritizing the list of tasks to be done. Our focus has been on supporting agile and Lean teams with internal product metrics for decision support regarding so-called refactoring, i.e. continuous quality improvement of the program's code and design. Deciding to refactor can be difficult, especially for agile and Lean teams, since they are expected to justify their priorities in terms of business value. We propose a way to measure the design quality of systems developed with the so-called model-driven paradigm, and we construct a way to integrate this metric into agile and Lean working methods. An important part of any process improvement initiative is spreading knowledge about the new software process. This applies regardless of the kind of process being introduced, be it plan-driven or agile. We suggest that methods based on collaboration when the process is created and further developed are a good way to support knowledge sharing. With that suggestion in mind, we provide an overview of process authoring tools available on the market.
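As an illustration of an internal product metric that could feed such refactoring decisions, the sketch below flags modules whose fan-out exceeds a threshold; this simple coupling count and its threshold are assumptions, not the design-quality measure proposed in the thesis.

```python
# Hypothetical refactoring-candidate report based on module coupling:
# modules that depend on more than `threshold` other modules are flagged.
def refactoring_candidates(dependencies: dict[str, set[str]],
                           threshold: int = 5) -> list[tuple[str, int]]:
    flagged = [(module, len(deps)) for module, deps in dependencies.items()
               if len(deps) > threshold]
    return sorted(flagged, key=lambda item: item[1], reverse=True)

if __name__ == "__main__":
    deps = {
        "billing":   {"orders", "customers", "tax", "pdf", "mail", "audit"},
        "orders":    {"customers", "stock"},
        "customers": {"audit"},
    }
    for module, fan_out in refactoring_candidates(deps, threshold=5):
        print(f"{module}: depends on {fan_out} modules -> consider refactoring")
```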
Abstract:
Software quality has become an important research subject, not only in the Information and Communication Technology spheres, but also in other industries at large where software is applied. Software quality does not happen by chance; it is defined, planned, and built into the software product throughout the Software Development Life Cycle. The research objective of this study is to investigate the roles of human and organizational factors that influence software quality construction. The study employs Straussian grounded theory. The empirical data were collected from 13 software companies and include 40 interviews. The results of the study suggest that tools, infrastructure and other resources have a positive impact on software quality, but that the human factors involved in the software development processes determine the quality of the products developed. Development methods, on the other hand, were found to have little effect on software quality. The research suggests that software quality is an information-intensive process whereby organizational structures, mode of operation, and information flow within the company variably affect software quality. The results also suggest that software development managers influence the productivity of developers and the quality of the software products. Several challenges of software testing that affect software quality are also brought to light. The findings of this research are expected to benefit the academic community and software practitioners by providing insight into the issues pertaining to software quality construction.