959 results for Process capability index
Abstract:
This paper considers the question of which is the better process: batch or continuous activated sludge? It is an important question because dissension still exists in the wastewater industry as to the relative merits of each process. A review of perceived differences between the processes from the point of view of two related disciplines, process engineering and biotechnology, is presented together with the results of previous comparative studies. These reviews highlight areas where more understanding is required. The paper provides this by applying the flexibility index to two case studies. The flexibility index is a useful process design tool that measures the ability of a process to cope with long-term changes in operation.
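For readers unfamiliar with the tool, a common formalization is the Swaney-Grossmann flexibility index; the sketch below is the textbook definition under assumed notation (design d, controls z, uncertain parameters theta with nominal point theta^N and expected deviations), not an equation reproduced from this paper:

F = \max \delta \quad \text{s.t.} \quad \max_{\theta \in T(\delta)} \; \min_{z} \; \max_{j \in J} \; g_j(d, z, \theta) \le 0,
\qquad T(\delta) = \{\theta : \theta^{N} - \delta\,\Delta\theta^{-} \le \theta \le \theta^{N} + \delta\,\Delta\theta^{+}\}

A process with F >= 1 can absorb the full expected range of long-term variation while remaining feasible.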
Abstract:
The article presents an analysis of the main characteristics of the administrative reform undertaken in the United Kingdom from the first Thatcher government in 1979 onwards. It begins by describing two peculiar aspects that, according to the authors, explain the intensity of the administrative reforms undertaken there: the British political system, in which the high decision-making capacity of the Executive stands out, and the weaknesses of its administrative system, the target of repeated criticism since the Fulton Committee Report, published in 1968. The authors then describe three recent phases of English administrative reform since Thatcher. This description focuses on the main characteristics and innovative experiments adopted, emphasizing, among others: 1) the so-called "Rayner scrutinies"; 2) the management information systems adopted (the Management Information System for Ministers and the Management Accounting System); 3) the Citizen's Charter programme; 4) the English privatization process; 5) the experience of contracting out services (with the adoption of instruments such as "competitive tendering", which allows civil servants themselves to submit proposals to provide services in competition with private companies, in addition to the Market Testing and Competing for Quality systems); and, finally, 6) human resources management policy, highlighting the strong wave of dismissals in the civil service and the staff performance appraisal and performance-related pay systems adopted in the United Kingdom.
Abstract:
This paper is part of the results from the project "Implementation Strategies and Development of an Open and Distance Education System for the University of the Azores" funded by the European Social Fund. http://hdl.handle.net/10400.3/2327
Abstract:
OBJECTIVE: To develop an index to evaluate maternal and neonatal hospital care in the Brazilian Unified Health System. METHODS: This descriptive cross-sectional study of national scope was based on the structure-process-outcome framework proposed by Donabedian and on comprehensive health care. Data from the Hospital Information System and the National Registry of Health Establishments were used. The maternal and neonatal network of the Brazilian Unified Health System consisted of 3,400 hospitals that performed at least 12 deliveries in 2009 or whose number of deliveries represented 10.0% or more of total admissions in 2009. Relevance and reliability were defined as criteria for the selection of variables. Simple and composite indicators and the index of completeness were constructed and evaluated, and the distribution of maternal and neonatal hospital care was assessed across the different regions of the country. RESULTS: A total of 40 variables were selected, from which 27 simple indicators, five composite indicators, and the index of completeness of care were built. Composite indicators were constructed by grouping simple indicators and covered the following dimensions: hospital size, level of complexity, delivery care practice, recommended hospital practice, and epidemiological practice. The index of completeness of care grouped the five composite indicators and classified hospitals in ascending order, thereby yielding five levels of completeness of maternal and neonatal hospital care: very low, low, intermediate, high, and very high. The hospital network was predominantly of small size and low complexity, with inadequate delivery care and poor development of recommended and epidemiological practices. The index showed that more than 80.0% of hospitals had a low index of completeness of care and that the most qualified health care services were concentrated in the more developed regions of the country. CONCLUSIONS: The index of completeness proved to be of great value for monitoring maternal and neonatal hospital care in the Brazilian Unified Health System and indicated that the quality of health care was unsatisfactory. However, its application does not replace specific evaluations.
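To illustrate the construction, a minimal sketch of how such an index of completeness could be assembled: the five composite indicator scores are summed and binned into the five ordered levels. The scoring scale and cut points below are illustrative assumptions, not the study's values.

LEVELS = ["very low", "low", "intermediate", "high", "very high"]

def completeness_level(size, complexity, delivery, recommended, epidemiological):
    # each composite indicator assumed scored 0 (absent) to 4 (fully present)
    score = size + complexity + delivery + recommended + epidemiological  # 0..20
    return LEVELS[min(score // 4, 4)]

print(completeness_level(1, 0, 2, 1, 0))  # -> "low"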
Abstract:
The aim of this study is to optimize the heat flow through the pultrusion die assembly system in the manufacturing process of a specific glass-fiber reinforced polymer (GFRP) pultrusion profile. The control of the heat flow and of its distribution through the whole die assembly system is of vital importance in optimizing the actual GFRP pultrusion process. Through mathematical modeling of the heating-die process, by means of a Finite Element Analysis (FEA) program, an optimum heater selection, die position and temperature control were achieved. The thermal environment within the die was modeled relative not only to the applied heat sources, but also to the conductive and convective losses, as well as to the thermal contribution arising from the exothermic reaction of the resin matrix as it cures or polymerizes from the liquid to the solid state. The numerical simulation was validated against thermographic measurements carried out at key points along the die during the pultrusion process.
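As a toy counterpart of the modeling step described above, the sketch below solves a 1D transient heat-conduction problem across a die wall with a heated face, a convective face and a crude exothermic cure term. Geometry, material values and the cure model are illustrative assumptions, not the die actually studied (which was modeled with 3D FEA).

import numpy as np

L_wall, nx = 0.05, 51                    # wall thickness [m], grid points
dx = L_wall / (nx - 1)
k, rho, cp = 40.0, 7800.0, 460.0         # steel-like properties (assumed)
alpha = k / (rho * cp)
dt = 0.4 * dx**2 / alpha                 # explicit (FTCS) stability margin
T = np.full(nx, 300.0)                   # initial temperature [K]
T_heater, h, T_amb = 450.0, 15.0, 293.0  # heater face, convection to ambient

def cure_heat(T):
    # crude stand-in for the resin's exothermic heat release [W/m^3]
    return 2.0e4 * np.exp(-((T - 400.0) / 25.0) ** 2)

for _ in range(20000):
    Tn = T.copy()
    T[1:-1] = (Tn[1:-1]
               + alpha * dt / dx**2 * (Tn[2:] - 2 * Tn[1:-1] + Tn[:-2])
               + dt * cure_heat(Tn[1:-1]) / (rho * cp))
    T[0] = T_heater                                          # Dirichlet: heater platen
    T[-1] = (T[-2] + dx * h / k * T_amb) / (1 + dx * h / k)  # Robin: convective loss

print(T.min(), T.max())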
Abstract:
The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4-8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originated by the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases, the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied successfully to blind source separation, to feature extraction, and to unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of observed data yielding statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31-38, where the sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of selecting the pixels that play the role of mixed sources is not straightforward.
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among the abundances. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades ICA performance. IFA [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA proceeds in two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, IFA performance. Under the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40-43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. The MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at lower computational complexity, algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requirement that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, the processing of hyperspectral data, including unmixing, is very often preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. A newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model. This model takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52-54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL) based algorithm [55].
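As an illustration of the density-fitting step just described, the sketch below fits mixtures of Gaussians with scikit-learn and selects the model order by BIC, used here as a stand-in for the chapter's MDL-based criterion; the data are synthetic assumptions:

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (500, 2)),
               rng.normal(4.0, 0.5, (300, 2))])  # synthetic two-mode data

# fit MOG models of increasing order and keep the one minimizing BIC
models = [GaussianMixture(n_components=c, random_state=0).fit(X)
          for c in range(1, 6)]
best = min(models, key=lambda m: m.bic(X))
print(best.n_components, best.means_.round(2))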
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need for pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief summary of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the ICA and IFA limitations in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
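To make the generative model and the constraints concrete, the sketch below draws abundance fractions from a Dirichlet density (so positivity and full additivity hold by construction), forms noisy linear mixtures, and unmixes one pixel by the constrained least-squares route mentioned earlier. Signatures, Dirichlet parameters and noise level are illustrative assumptions:

import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
B, p, N = 150, 4, 1000                      # bands, endmembers, pixels
M = rng.uniform(0.05, 0.95, size=(B, p))    # synthetic endmember signatures
A = rng.dirichlet([1.5, 1.0, 2.0, 0.5], size=N).T   # p x N abundance fractions
Y = M @ A + 0.001 * rng.standard_normal((B, N))     # linear mixtures + noise

def fcls(M, y, delta=1e3):
    # fully constrained LS: nonnegativity via nnls, sum-to-one enforced
    # softly by a heavily weighted extra row of ones
    M_aug = np.vstack([M, delta * np.ones((1, M.shape[1]))])
    a, _ = nnls(M_aug, np.append(y, delta))
    return a

print(A[:, 0], fcls(M, Y[:, 0]))            # true vs. estimated abundances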
Abstract:
SUMMARY The aim of this study was to evaluate the effects of protein-calorie malnutrition in BALB/c isogenic mice infected with Lacazia loboi, employing nutritional and histopathological parameters. Four groups were formed: G1: inoculated, restricted diet; G2: not inoculated, restricted diet; G3: inoculated, regular diet; G4: not inoculated, regular diet. Once malnutrition had been imposed, the animals were inoculated intradermally in the footpad and, after four months, were sacrificed for excision of the footpad, liver and spleen. The infection did not exert great influence on the body weight of the mice. The weights of the liver and spleen were reduced in the undernourished groups compared to the nourished groups. The macroscopic lesions, viability index and total number of fungi found in the footpads of the infected mice were increased in G3 compared to G1. Regarding the histopathological analysis of the footpad, a global increase in the cellularity of the granuloma was observed in G3 compared to G1, with large numbers of macrophages and multinucleated giant cells; lymphocytes were present in discrete numbers in G3 and in increased numbers in G1. The results suggest that there is considerable interaction between Jorge Lobo's disease and nutrition.
Abstract:
Objective: This study aimed to contribute to the cross-cultural adaptation of the Neck Disability Index (NDI) by analyzing its unidimensionality and studying its reliability (internal consistency and test-retest reliability), construct validity and responsiveness. It also aimed to characterize the physical therapy intervention delivered to patients with Chronic Neck Pain (CNP) and the results obtained. Introduction: Neck pain is an increasingly common problem in industrialized countries and one of the three conditions most frequently reported among musculoskeletal complaints. Its incidence is growing, with substantial costs to society, which makes an instrument that monitors the evolution of the functional disability associated with CNP important. The NDI is currently the most widely recommended instrument for assessing functional disability associated with neck pain. It has been translated and adapted into Portuguese, but to date no evaluation of its psychometric properties had been carried out. Moreover, although the literature reports that physical therapy services are in high demand among individuals with CNP, information on physical therapy practice for this clinical condition in Portugal is scarce or nonexistent. Since disability in functional activities is one of the variables with the greatest impact in CNP and, at the same time, one of the main outcomes of physical therapy intervention, it is important both to have instruments capable of assessing the level of functional disability and its change, and to establish what intervention physical therapy delivers and what results it obtains. Methodology: A prospective cohort study was conducted with a non-probabilistic convenience sample of 88 patients with CNP of musculoskeletal origin and non-traumatic cause, referred to six physical therapy / physical medicine and rehabilitation services of clinics and rehabilitation centers; all patients who met the established inclusion and exclusion criteria were eligible. Patients were assessed at three pre-defined moments: before the start of physical therapy or in the first week of treatment; 4 to 7 days after the first assessment; and 7 weeks after the start of physical therapy. An Exploratory Factor Analysis was performed to verify the unidimensionality of the NDI. The psychometric properties evaluated were reliability (internal consistency and test-retest reliability), construct validity and responsiveness. Physical therapy practice was then characterized in terms of the modalities used, the number of treatment sessions and the duration of the episode of care, and the results obtained after the intervention were described in terms of pain and disability. Results: The results were positive and significant, confirming the unidimensionality of the NDI: under all criteria applied, a single factor was retained. Internal consistency was above the minimum acceptable level (Cronbach's α = 0.77) and test-retest reliability was high (ICC = 0.95). Construct validity results were also positive, with a positive association between the NDI and the Numeric Pain Rating Scale (NPRS). Responsiveness analysis yielded an Area Under the Curve of 0.63 (95% CI 0.51-0.75), with a Minimal Clinically Important Difference of 5.5 points (sensitivity = 69.6%; specificity = 43.6%). As for the physical therapy intervention in CNP, the characteristics of practice reported here are difficult to compare or analyze given the scarcity of published work on this topic in patients with CNP. Nevertheless, significant reductions in pain intensity and functional disability were found after the physical therapy intervention (z = -7.16, p < 0.001 and t = 10.412, p < 0.05, respectively). Conclusion: The results of this study show that the Portuguese version of the NDI has good reliability, construct validity and responsiveness. They also show that, despite the scarcity of published studies, physical therapy intervention in CNP provides a significant reduction in pain and disability levels.
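For reference, the internal-consistency statistic reported above can be computed directly from an item-score matrix; the sketch below uses simulated 10-item NDI-style scores (0-5) for 88 respondents rather than the study's data:

import numpy as np

def cronbach_alpha(items):
    # items: respondents x items score matrix
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

scores = np.random.default_rng(0).integers(0, 6, size=(88, 10))
print(cronbach_alpha(scores))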
Abstract:
Requirements Engineering has been acknowledged as an essential discipline for Software Quality. Poorly defined processes for eliciting, analyzing, specifying and validating requirements can lead to unclear issues or misunderstandings about business needs and project scope. These typically result in customer dissatisfaction with the product's quality or in increases to the project's budget and duration. Maturity models allow an organization to measure the quality of its processes and improve them according to an evolutionary path based on levels. The Capability Maturity Model Integration (CMMI) addresses the aforementioned Requirements Engineering issues. CMMI defines a set of best practices for process improvement that are divided into several process areas. Requirements Management and Requirements Development are the process areas concerned with Requirements Engineering maturity. Altran Portugal is a consulting company concerned with the quality of its software. In 2012, the Solution Center department successfully developed and applied a set of processes aligned with CMMI-DEV v1.3, which earned it a maturity level 2 certification. For 2015, it defined an organizational goal of reaching CMMI-DEV maturity level 3. This MSc dissertation is part of that organizational effort. In particular, it is concerned with the process areas that address the activities of Requirements Engineering. Our main goal is to contribute to the development of Altran's internal engineering processes so that they conform to the guidelines of the Requirements Development process area. Throughout this dissertation, we started with an evaluation method based on CMMI and conducted a compliance assessment of Altran's current processes. This demonstrated their alignment with the CMMI Requirements Management process area and highlighted the improvements needed to conform to the Requirements Development process area. Based on a study of alternative solutions for the gaps found, we proposed a new Requirements Management and Development process that was later validated using three different approaches. The main contribution of this dissertation is the new process developed for Altran Portugal. However, given that studies on these topics are not abundant in the literature, we also expect to contribute useful evidence to the existing body of knowledge with a survey on CMMI and requirements engineering trends. Most importantly, we hope that the implementation of the proposed process improvements will minimize the risks of mishandled requirements, increasing Altran's performance and taking it one step closer to the desired maturity level.
Abstract:
Pavements require maintenance in order to provide good service levels throughout their service life. Given the significant costs of this operation and the importance of proper planning, a pavement evaluation methodology named the Pavement Condition Index (PCI) was created by the U.S. Army Corps of Engineers. This methodology allows the pavement condition to be evaluated over the service life, generally yearly, at minimum cost; in this way it is possible to plan maintenance actions and adopt adequate measures, minimizing rehabilitation costs. The PCI methodology provides an evaluation based on visual inspection, namely on the distresses observed on the pavement. The condition index is rated from 0 to 100, where 0 is the worst possible condition and 100 the best. This pavement assessment methodology is a significant tool for management methods such as airport pavement management systems (APMS) and life-cycle cost analysis (LCCA). Nevertheless, it has some limitations that can jeopardize the correct evaluation of pavement behavior. The objective of this dissertation is therefore to help reduce these limitations and make the method easier and faster to use. Thus, an automated PCI calculation process was developed, avoiding consultation of the abaci and consequently minimizing human error. To further facilitate the visual inspection, a tablet application was developed to replace the usual inspection data sheet, making the survey easier to undertake. Next, an airport pavement's condition was studied according to the methodology described in the Standard Test Method for Airport Pavement Condition Index Surveys (D5340, 2011), comparing its original condition level with the condition level obtained after iterating over possibly misjudged distresses as well as possible rehabilitations. Finally, the results obtained were analyzed and the main conclusions presented together with some future developments.
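A rough sketch of the automated calculation is given below. In D5340 the deduct values come from distress-specific curves and the corrected deduct value from an iterative procedure; both are replaced here by simplified placeholders, so this shows only the shape of the computation, not the standard's method:

# hypothetical, flattened stand-in for the D5340 deduct-value abaci
DEDUCT_TABLE = {("longitudinal_crack", "low"): 5.0,
                ("longitudinal_crack", "high"): 18.0,
                ("rutting", "high"): 25.0}

def deduct_value(distress, severity, density):
    # density: distressed fraction of the sample unit area (0..1)
    return DEDUCT_TABLE.get((distress, severity), 0.0) * min(4.0 * density, 1.0)

def pci(distresses):
    deducts = sorted((deduct_value(*d) for d in distresses), reverse=True)
    if not deducts:
        return 100.0
    # simplified correction: keep the largest deduct, discount the rest
    # (the real method iterates over corrected-deduct-value curves)
    cdv = deducts[0] + 0.4 * sum(deducts[1:])
    return max(0.0, 100.0 - min(cdv, 100.0))

print(pci([("longitudinal_crack", "high", 0.3), ("rutting", "high", 0.1)]))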
Abstract:
Application of Experimental Design techniques has proven essential in various research fields, due to their statistical capability of processing the effect of interactions among independent variables, known as factors, on a system's response. The advantages of this methodology can be summarized as more resource- and time-efficient experimentation while providing more accurate results. This research emphasizes the quantification of the extraction of four antioxidants, at two different concentrations, prepared according to an experimental procedure and measured by a Photodiode Array Detector. Experimental planning followed a Central Composite Design, a type of DoE that allows the quadratic component of response surfaces to be considered, enabling pure-curvature studies of the fitted model. This work was carried out with the intention of analyzing the responses, peak areas obtained from the chromatograms plotted by the detector's system, and of understanding whether the factors considered, drawn from an extensive literature review, produced the expected effect on the response. Completion of this work will allow conclusions to be drawn regarding which factors should be considered in optimization studies of antioxidant extraction in an Oca (Oxalis tuberosa) matrix.
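As an illustration of the experimental planning step, the sketch below generates a rotatable central composite design in coded units with plain NumPy; the factor count and number of centre points are assumptions, not the study's actual plan:

import itertools
import numpy as np

def central_composite(k, n_center=4):
    alpha = (2 ** k) ** 0.25                 # axial distance for rotatability
    factorial = np.array(list(itertools.product([-1.0, 1.0], repeat=k)))
    axial = np.vstack([sign * alpha * np.eye(k)[i]
                       for i in range(k) for sign in (-1.0, 1.0)])
    center = np.zeros((n_center, k))
    return np.vstack([factorial, axial, center])

design = central_composite(4)   # 16 factorial + 8 axial + 4 centre runs
print(design.shape)             # (28, 4)

Each row is one run in coded units; the axial and centre points are what make the quadratic terms of the response surface estimable.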
Abstract:
The year is 2015 and the startup and tech business ecosphere has never seen more activity. In New York City alone, the tech startup industry is on track to amass $8 billion in total funding, the highest in 7 years (CB Insights, 2015). According to the Kauffman Index of Entrepreneurship (2015), this figure represents just 20% of the total funding in the United States. Thanks to platforms that link entrepreneurs with investors, there are simply more funding opportunities than ever, and funding can be initiated in a variety of ways (angel investors, venture capital firms, crowdfunding). And yet, in spite of all this, according to Forbes Magazine (2015), nine out of ten startups will fail. Because of the unpredictable nature of the modern tech industry, it is difficult to pinpoint exactly why 90% of startups fail, but the general consensus among top tech executives is that "startups make products that no one wants" (Fortune, 2014). In 2011, author Eric Ries wrote a book called The Lean Startup in an attempt to solve this all-too-familiar problem. It was in this book that he developed the framework for the Hypothesis-Driven Entrepreneurship Process, an iterative process that aims at proving a market before actually launching a product. Ries discusses concepts such as the Minimum Viable Product, the smallest set of activities necessary to disprove a hypothesis (or business model characteristic). Ries encourages iterating quickly and often: if you are to fail, then fail fast. In today's fast-moving economy, an entrepreneur cannot afford to waste his own time, nor his customer's time. The purpose of this thesis is to conduct an in-depth analysis of the Hypothesis-Driven Entrepreneurship Process, in order to test the market viability of a real-life startup idea, ShowMeAround. This analysis will follow the scientific Lean Startup approach, with the aim of developing a functional business model and business plan. The objective is to conclude with an investment-ready startup idea, backed by rigorous entrepreneurial study.
Abstract:
Given the current economic situation of Portuguese municipalities, it is necessary to identify priority investments in order to achieve more efficient financial management. Classifying a municipality's road network according to the occurrence of traffic accidents is fundamental to setting priorities for road interventions. This paper presents a model for road network classification based on traffic accidents, integrated in a geographic information system. Its practical application was developed through a case study in the municipality of Barcelos. An equation was defined to obtain a road safety index by combining the following indicators: severity, property damage only and accident costs. In addition to classifying the road network, applying the model makes it possible to analyze the spatial coverage of accidents in order to determine the centrality and dispersion of the locations with the highest incidence of road accidents. This analysis can be further refined according to the nature of the accidents, namely collisions, run-off-road crashes and pedestrian crashes.
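A minimal sketch of how such an index can combine the three indicators per road segment; the normalization, weights and segment data below are illustrative assumptions, not the paper's calibrated equation:

def road_safety_index(severity, pdo, cost, weights=(0.5, 0.2, 0.3)):
    # each indicator assumed pre-normalized to [0, 1] for the segment;
    # higher values mean a worse safety record
    return sum(w * x for w, x in zip(weights, (severity, pdo, cost)))

# rank segments from worst to best to set intervention priority
segments = {"segment_A": (0.8, 0.4, 0.7), "segment_B": (0.3, 0.6, 0.2)}
ranking = sorted(segments, key=lambda s: road_safety_index(*segments[s]),
                 reverse=True)
print(ranking)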
Abstract:
Integrated master's dissertation in Civil Engineering
Abstract:
Software engineering, software measurement, software process engineering, capability, maturity