Abstract:
Computed tomography (CT) is an imaging technique in which interest has grown steadily since its introduction in the early 1970s. Today it has become an indispensable modality, owing largely to its ability to produce high-quality diagnostic images. However, even though CT brings a clear benefit to patient care, the dramatic increase in the number of CT examinations performed has raised concerns about the potentially harmful effects of ionising radiation on the population. Among these, one of the major risks remains the induction of cancers associated with exposure to diagnostic X-ray procedures. To ensure that the benefit-risk ratio remains in the patient's favour, the delivered dose must allow the correct diagnosis to be made while avoiding unnecessarily high image quality. This optimisation process is already an important concern for adult patients, but it must become a priority when children or adolescents are examined, in particular in follow-up studies requiring several CT examinations over the patient's lifetime. Children and young adults are indeed more sensitive to radiation because of their faster metabolism, and harmful consequences are more likely to occur because of their longer life expectancy.
The recent introduction of iterative reconstruction algorithms, designed to substantially reduce patient exposure, is certainly one of the major advances in CT, but it also complicates the quality assessment of the images these algorithms produce. The goal of the present work was to propose a strategy for investigating the potential of iterative reconstruction to reduce dose without compromising the ability to answer the diagnostic question. The main difficulty lies in having a clinically relevant way to assess image quality. To ensure that pertinent image quality criteria were chosen, this work was carried out in close collaboration with radiologists. We began by characterising image quality in musculoskeletal examinations, focusing in particular on the behaviour of image noise and spatial resolution when iterative reconstruction is used. The analysis of these physical parameters allowed radiologists to adapt their acquisition and reconstruction protocols while knowing what loss of image quality to expect. The work also addressed the loss of low-contrast detectability associated with dose reduction, a major concern in abdominal investigations. Knowing that alternatives to the classical Fourier-space metrics were needed to assess image quality, we turned to mathematical model observers; our experimental parameters determined which type of model to use.
Ideal model observers were applied to characterise image quality when purely objective measures of signal detectability were sought, whereas anthropomorphic model observers were used in more clinical contexts, when results had to be compared with those of human observers, taking advantage of the human visual system elements these models incorporate. This work confirmed that model observers make it possible to assess image quality with a task-based approach, which in turn builds a bridge between medical physicists and radiologists. It also demonstrated that statistical iterative reconstruction has the potential to reduce the delivered dose without impairing the quality of the diagnosis. Among the different types of iterative reconstruction, model-based algorithms offer the greatest potential, since the images they produce can still lead to an accurate diagnosis even when acquired at very low dose. Finally, this work clarified the role of the medical physicist in CT imaging: standard metrics remain important for assessing a unit's compliance with legal requirements, but model observers are the tool of choice for optimising imaging protocols.
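The task-based assessment described above can be illustrated with a minimal channelized Hotelling observer (CHO) on synthetic images. Everything below (the faint disc signal, the white noise, the crude ring channels) is an invented toy setup for illustration, not the study's data, dose levels, or channel model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-class detection task: a faint disc signal in noise, as a stand-in
# for low-contrast CT detectability (illustrative only).
n, size = 200, 32
yy, xx = np.mgrid[:size, :size]
signal = 0.3 * ((xx - size / 2) ** 2 + (yy - size / 2) ** 2 < 5 ** 2)

noise = rng.normal(0, 1, (2 * n, size, size))
imgs_absent = noise[:n]
imgs_present = noise[n:] + signal

# Crude concentric-ring channels (a coarse stand-in for the Gabor or
# Laguerre-Gauss channels usually used in CHO studies).
def channelize(img, n_ch=4):
    r = np.sqrt((xx - size / 2) ** 2 + (yy - size / 2) ** 2)
    edges = np.linspace(0, size / 2, n_ch + 1)
    return np.array([img[(r >= a) & (r < b)].mean()
                     for a, b in zip(edges, edges[1:])])

v_a = np.array([channelize(i) for i in imgs_absent])
v_p = np.array([channelize(i) for i in imgs_present])

# Hotelling template and detectability index d' in channel space.
S = 0.5 * (np.cov(v_a.T) + np.cov(v_p.T))   # pooled channel covariance
dmean = v_p.mean(0) - v_a.mean(0)
w = np.linalg.solve(S, dmean)               # Hotelling template
d_prime = np.sqrt(dmean @ w)
print(f"CHO detectability d' = {d_prime:.2f}")
```

In an optimisation study, d' would be tracked across dose levels and reconstruction algorithms; the dose at which d' drops below a clinically acceptable threshold bounds the achievable dose reduction.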
Abstract:
Waddlia chondrophila, an obligate intracellular bacterium belonging to the order Chlamydiales, is considered an emerging pathogen. Clinical studies have highlighted a possible role of W. chondrophila in bronchiolitis, pneumonia and miscarriage. This pathogenic potential is further supported by the ability of W. chondrophila to infect and replicate within human pneumocytes, macrophages and endometrial cells. Considering that W. chondrophila might be a causative agent of respiratory tract infection, we developed a mouse model of respiratory tract infection to gain insight into the pathogenesis of W. chondrophila. Following intranasal inoculation of 2 × 10⁸ W. chondrophila, mice lost up to 40% of their body weight and succumbed rapidly to infection, with a death rate reaching 50% at day 4 post-inoculation. Bacterial loads, estimated by qPCR, increased from day 0 to day 3 post-infection and decreased thereafter in surviving mice. Bacterial growth was confirmed by detecting dividing bacteria using electron microscopy, and living bacteria were isolated from lungs 14 days post-infection. Immunohistochemistry and histopathology of infected lungs revealed the presence of bacteria associated with pneumonia characterized by marked multifocal inflammation. The high inflammatory score in the lungs was associated with the presence of pro-inflammatory cytokines in both serum and lungs at day 3 post-infection. This animal model supports the role of W. chondrophila as an agent of respiratory tract infection and will help in understanding the pathogenesis of this strictly intracellular bacterium.
Abstract:
BACKGROUND: In this study, we further investigated the association of two biomarkers, CCL18 and A1AT, with bladder cancer (BCa) and evaluated the influence of potentially confounding factors in an experimental model. METHODS: In a cohort of 308 subjects (102 with BCa), urinary concentrations of CCL18 and A1AT were assessed by enzyme-linked immunosorbent assay (ELISA). In an experimental model, benign or cancerous cells, in addition to blood, were added to urine from healthy controls and analyzed by ELISA. Lastly, immunohistochemical (IHC) staining for CCL18 and A1AT in human bladder tumors was performed. RESULTS: Median urinary protein concentrations of CCL18 (52.84 pg/ml vs. 11.13 pg/ml, p < 0.0001) and A1AT (606.4 ng/ml vs. 120.0 ng/ml, p < 0.0001) were significantly elevated in BCa subjects compared to controls. Furthermore, the addition of whole blood to pooled normal urine resulted in a significant increase in both CCL18 and A1AT. IHC staining of bladder tumors revealed CCL18 immunoreactivity in inflammatory cells only; there was no significant difference in these immunoreactive cells between benign and cancerous tissue, and no association with BCa grade or stage was noted. A1AT immunoreactivity was observed in the cytoplasm of epithelial cells, and the intensity of immunostaining increased with tumor grade, but not with tumor stage. CONCLUSIONS: Further development of A1AT as a diagnostic biomarker for BCa is warranted.
Abstract:
Behavior-based navigation of autonomous vehicles requires the recognition of navigable areas and potential obstacles. In this paper we describe a model-based object recognition system which is part of an image interpretation system intended to assist the navigation of autonomous vehicles operating in industrial environments. The recognition system integrates color, shape and texture information together with the location of the vanishing point. The recognition process starts from some prior scene knowledge, that is, a generic model of the expected scene and the potential objects. The recognition system constitutes an approach in which different low-level vision techniques extract a multitude of image descriptors which are then analyzed by a rule-based reasoning system to interpret the image content. The system has been implemented using a rule-based cooperative expert system.
Abstract:
We describe a model-based object recognition system which is part of an image interpretation system intended to assist autonomous vehicle navigation. The system is intended to operate in man-made environments. Behavior-based navigation of autonomous vehicles involves the recognition of navigable areas and potential obstacles. The recognition system integrates color, shape and texture information together with the location of the vanishing point. The recognition process starts from some prior scene knowledge, that is, a generic model of the expected scene and the potential objects. The recognition system constitutes an approach in which different low-level vision techniques extract a multitude of image descriptors which are then analyzed by a rule-based reasoning system to interpret the image content. The system has been implemented using CEES, the C++ embedded expert system shell developed in the Systems Engineering and Automatic Control Laboratory (University of Girona) as a specific rule-based problem-solving tool. CEES was especially conceived to support cooperative expert systems and uses the object-oriented programming paradigm.
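The descriptor-plus-rules pipeline described in these two abstracts can be sketched as a toy forward-chaining classifier. The descriptors, rule conditions and labels below are hypothetical stand-ins; the actual system used CEES and far richer scene models:

```python
# Toy rule-based region classifier (hypothetical; the original system used
# CEES, a C++ expert system shell, with cooperating expert modules).
def classify_region(desc):
    """desc: dict of low-level descriptors for one segmented image region."""
    rules = [
        # Smooth gray region below the vanishing point: likely drivable floor.
        (lambda d: d["color"] == "gray" and d["texture"] == "smooth"
                   and d["below_vanishing_point"], "navigable area"),
        # Rough gray vertical surface: likely a wall.
        (lambda d: d["color"] == "gray" and d["texture"] == "rough",
         "obstacle: wall"),
        # Tall region relative to the frame: likely a free-standing object.
        (lambda d: d["height_ratio"] > 0.5, "obstacle: object"),
    ]
    for condition, label in rules:          # fire the first matching rule
        if condition(desc):
            return label
    return "unknown"

region = {"color": "gray", "texture": "smooth",
          "below_vanishing_point": True, "height_ratio": 0.1}
print(classify_region(region))  # → navigable area
```

The vanishing-point descriptor is what lets such rules distinguish floor from wall when color and texture alone are ambiguous.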
Abstract:
This paper presents the use of two-dimensional spectral analysis of terrain heights to determine characteristic terrain spatial scales, and its subsequent use to objectively define the grid size required to resolve terrain forcing. To illustrate the influence of grid size, atmospheric flow over a complex terrain area of the Spanish east coast is simulated with the Regional Atmospheric Modeling System (RAMS) mesoscale numerical model at different horizontal grid resolutions. In this area, a grid size of 2 km is required to account for 95% of the terrain variance. Comparison among the results of the different simulations shows that, although the main wind behavior does not change dramatically, some small-scale features appear when a resolution of 2 km or finer is used. Horizontal flow-pattern differences are significant both at nighttime, when terrain forcing is more relevant, and in the daytime, when thermal forcing is dominant. Vertical structures are also investigated, and the results show that vertical advection is highly influenced by the horizontal grid size during the daytime period. The turbulent kinetic energy and potential temperature vertical cross sections show substantial differences in the structure of the planetary boundary layer for each model configuration.
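The spectral procedure for choosing a grid size can be sketched on synthetic terrain: take the 2-D power spectrum of the heights, accumulate variance from large to small scales, and find the smallest wavelength needed to reach 95% of the variance, which a grid must sample at the Nyquist rate. The k⁻³ synthetic spectrum below is an illustrative assumption, not the Spanish east coast data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic terrain with a red (power ~ k^-3) spectrum as a stand-in for
# real topography; dx is the sample spacing in km.
N, dx = 256, 0.5
kx = np.fft.fftfreq(N, d=dx)
ky = np.fft.fftfreq(N, d=dx)
k = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)
amp = np.zeros_like(k)
amp[k > 0] = k[k > 0] ** -1.5                 # amplitude ~ k^-1.5 → power ~ k^-3
phase = np.exp(2j * np.pi * rng.random((N, N)))
terrain = np.fft.ifft2(amp * phase).real

# Power spectrum, then cumulative variance from large to small scales.
P = np.abs(np.fft.fft2(terrain - terrain.mean())) ** 2
order = np.argsort(k, axis=None)              # flat indices, k ascending
cumvar = np.cumsum(P.ravel()[order]) / P.sum()

# Smallest wavenumber band needed to capture 95% of the variance; the grid
# must sample the corresponding wavelength with at least two points (Nyquist).
k95 = k.ravel()[order][np.searchsorted(cumvar, 0.95)]
grid_size = 1.0 / (2 * k95)
print(f"grid size capturing 95% of terrain variance: {grid_size:.2f} km")
```

Applied to real elevation data over the study area, the same calculation is what yields the 2 km figure quoted in the abstract.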
Abstract:
The reform of electricity markets has initiated the creation of ancillary services markets all over the world. The Russian electricity market reform is in a transition period, which is why the question of an ancillary services market has only recently arisen. Since the model market rules were created, the ancillary services market has become a topical question for generating companies. This master's thesis focuses on describing the possible ancillary services around the world and in Russia specifically. Moreover, the physical interpretation of ancillary services is defined. In addition, the possibility for a generating company to participate in the ancillary services market is considered. Calculations were made for the primary frequency regulation service, for which the necessary level of price bids and the payback period were evaluated.
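The payback-period evaluation mentioned at the end can be sketched with a simple (undiscounted) payback calculation; all figures below are invented for illustration and are not the thesis's bid levels or costs:

```python
# Hypothetical figures (NOT from the thesis): simple payback for the control
# equipment needed to offer primary frequency regulation.
capex = 1_200_000.0          # one-off investment, EUR
annual_revenue = 420_000.0   # revenue from service price bids, EUR/year
annual_opex = 60_000.0       # extra operating cost of the service, EUR/year

payback_years = capex / (annual_revenue - annual_opex)
print(f"simple payback period: {payback_years:.1f} years")  # → 3.3 years
```

The revenue term is where the "necessary level of price bids" enters: the bid price times the expected contracted volume must make the payback period acceptable to the generating company.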
Abstract:
Rust, caused by Puccinia psidii, is one of the most important diseases affecting eucalyptus in Brazil. The pathogen causes disease in mini-clonal gardens and in young plants in the field, especially on leaves and juvenile shoots. Climate conditions favorable for infection of eucalyptus by this pathogen are temperatures between 18 and 25 °C together with leaf wetness periods of at least 6 hours for 5 to 7 consecutive days. Considering the interaction between the environment and the pathogen, this study aimed to evaluate the potential impact of global climate change on the spatial distribution of areas at risk for the occurrence of eucalyptus rust in Brazil. Monthly maps of the areas at risk for the occurrence of this disease were elaborated, considering the current climate conditions, based on a historical series from 1961 to 1990, and the future scenarios A2 and B2 predicted by the IPCC. The climate conditions were classified into three categories according to the potential risk of disease occurrence, considering temperature (T) and air relative humidity (RH): i) high risk (18 < T < 25 °C and RH > 90%); ii) medium risk (18 < T < 25 °C and RH < 90%, or T < 18 or T > 25 °C and RH > 90%); and iii) low risk (T < 18 or T > 25 °C and RH < 90%). Data on the future climate scenarios were supplied by GCM Change Fields. The HadCM3 simulation model of the Hadley Centre for Climate Prediction and Research was adopted, using the software Idrisi 32. The results led to the conclusion that the area favorable to eucalyptus rust occurrence will shrink; the reduction will be gradual over the decades of 2020, 2050 and 2080, and more marked in scenario A2 than in B2. However, extensive areas will still be favorable to disease development, especially in the coldest months of the year, i.e., June and July.
Therefore, the zoning of areas and periods of higher occurrence risk under global climate change provides important knowledge for the elaboration of predictive models and an alert for the integrated management of this disease.
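The three risk categories defined above map directly onto a small classification function (boundary cases such as T exactly 18 °C or RH exactly 90% are not specified in the abstract and are resolved arbitrarily here):

```python
# Risk categories for eucalyptus rust, from the rules in the abstract:
# T in °C, RH in %. High: favorable T AND RH; medium: exactly one of the
# two favorable; low: neither favorable.
def rust_risk(T, RH):
    t_ok = 18 < T < 25     # favorable temperature window
    rh_ok = RH > 90        # favorable humidity
    if t_ok and rh_ok:
        return "high"
    if t_ok or rh_ok:
        return "medium"
    return "low"

print(rust_risk(20, 95))  # → high
print(rust_risk(20, 80))  # → medium
print(rust_risk(30, 80))  # → low
```

Applied cell by cell to the monthly T and RH fields of the historical series or a HadCM3 scenario, this rule produces exactly the kind of monthly risk maps the study describes.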
Abstract:
This study examined solution business models and how they could be applied to the energy efficiency business. The target was to find out what a functional solution business model applied to energy efficiency improvement projects is like. The term “functionality” refers not only to economic viability but also to environmental and legal aspects, to the implementation of Critical Success Factors (CSFs), and to the ability to overcome the most important market barriers and risks. The thesis is based on a comprehensive literature study on solution business, business models and the energy efficiency business. The literature review served as the foundation for an energy efficiency solution business model scheme. The created scheme was tested in a case study that examined two different energy efficiency improvement projects, illustrated the functionality of the created business model and evaluated their potential as customer targets. The solution approach was found to be suitable for the energy efficiency business. The most important characteristics of a good solution business model were identified as the relationship between supplier and customer, a proper network, knowledge of the customer's process, and supreme technological expertise. The energy efficiency solution business was thus recognized to be particularly suitable, for example, for energy suppliers or technological equipment suppliers. Because the case study was not executed from a particular company's point of view, the most important factors, such as relationships and the availability of funding, could not be evaluated. Although the energy efficiency business is recognized to be economically viable, the most important factors influencing the profitability and success of an energy efficiency solution business model were identified as proper risk management, the ability to overcome market barriers, and the realization of the CSFs.
Abstract:
The numerous methods for calculating the potential or reference evapotranspiration (ETo or ETP) almost always do so for a 24-hour period, including values of climatic parameters from the nocturnal period (daily averages), even though these contribute practically nothing to transpiration, the main evaporative demand process under localized irrigation. The aim of this manuscript was to propose a rather simplified model for calculating diurnal daily ETo. It is an alternative approach based on the theoretical background of the Penman method that does not require values of the aerodynamic conductances for latent and sensible heat fluxes, nor data on wind speed and air relative humidity. The comparison between diurnal ETo values measured in high-precision weighing lysimeters and those estimated by either the Penman-Monteith method or the Simplified-Penman approach under study shows a fairly consistent agreement among the potential demand calculation criteria. The Simplified-Penman approach was a feasible alternative for estimating ETo under the local meteorological conditions of the two field trials. Given the availability of the required input data, the method could be employed in other climatic regions for scheduling irrigation.
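The abstract does not reproduce the Simplified-Penman equations. As an illustration of a radiation-based simplification that, like the approach described, dispenses with wind speed and humidity data, a Priestley-Taylor-type estimate can be sketched. This is a standard textbook form, not the thesis's method:

```python
import math

# Priestley-Taylor-type estimate (standard textbook form; NOT the thesis's
# Simplified-Penman model, whose equations are not given in the abstract).
def et_radiation_based(T, Rn, G=0.0, alpha=1.26):
    """Evaporation (mm/day) from mean air temperature T (°C), net radiation
    Rn and soil heat flux G (both MJ m-2 day-1)."""
    # slope of the saturation vapour pressure curve (kPa/°C), FAO-56 form
    es = 0.6108 * math.exp(17.27 * T / (T + 237.3))
    delta = 4098 * es / (T + 237.3) ** 2
    gamma = 0.066          # psychrometric constant (kPa/°C), near sea level
    lam = 2.45             # latent heat of vaporisation (MJ/kg)
    return alpha * delta / (delta + gamma) * (Rn - G) / lam

print(f"{et_radiation_based(T=25, Rn=15):.1f} mm/day")  # → 5.7 mm/day
```

Like the Simplified-Penman approach, this retains the radiation (energy) term of Penman's framework while dropping the aerodynamic term that requires wind and humidity data.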
Abstract:
Objective: to describe and evaluate the acceptance of a low-cost porcine model for chest tube insertion in a medical education project in the southwest of Paraná, Brazil. Methods: we developed a low-cost, low-technology porcine model for teaching chest tube insertion and used it in a teaching project. Medical trainees - students and residents - received theoretical instruction about the procedure and performed thoracic drainage on the porcine model. After performing the procedure, the participants filled out a feedback questionnaire about the proposed experimental model. This study presents the model and analyzes the questionnaire responses. Results: seventy-nine medical trainees used and evaluated the model. The anatomical correlation between the porcine model and human anatomy was rated high, averaging 8.1±1.0 among trainees. All study participants approved the low-cost porcine model for chest tube insertion. Conclusion: the presented low-cost porcine model for chest tube insertion training was feasible and had good acceptability among trainees. The model has potential as a teaching tool in medical education.
Abstract:
The repair of segmental defects in load-bearing long bones is a challenging task because of the diversity of the loads affecting the area: axial, bending, shearing and torsional forces all come together to test the stability and integrity of the bone. The natural biomechanical requirements for bone restorative materials include strength to withstand heavy loads, and adaptivity to conform to a biological environment without disturbing or damaging it. Fiber-reinforced composite (FRC) materials have shown promise, as metals and ceramics have been too rigid, and polymers alone lack the strength needed for restoration. The versatility of fiber-reinforced composites also allows tailoring of the composite to meet the multitude of bone properties in the skeleton. The attachment and incorporation of a bone substitute to bone has been advanced by different surface modification methods. Most often this is achieved by creating a surface texture that allows bone growth onto the substitute, creating a mechanical interlocking. Another method is to alter the chemical properties of the surface to create bonding with the bone – for example with a hydroxyapatite (HA) or a bioactive glass (BG) coating. A novel fiber-reinforced composite implant material with a porous surface was developed for bone substitution in load-bearing applications. The material's biomechanical properties were tailored with unidirectional fiber reinforcement to match the strength of cortical bone. To promote bone growth onto the material, an optimal surface porosity was created by a dissolution process, and the addition of bioactive glass to the material was explored. The effects of dissolution and of the orientation of the fiber reinforcement were also evaluated for bone-bonding purposes. The biological response to the implant material was evaluated in a cell culture study to assure the safety of the combined materials.
To test the material's properties in a clinical setting, an animal model was used. A critical-size bone defect in a rabbit's tibia was used to test the material in a load-bearing application, with short- and long-term follow-up and a histological evaluation of the incorporation into the host bone. The biomechanical results showed that the material is durable and that the tailoring of its properties can be reproduced reliably. The biological response - ex vivo - to the created surface structure favours the attachment and growth of bone cells, with the additional benefit of bioactive glass appearing on the surface. No toxic reactions to agents possibly leaching from the material could be detected in the cell culture study when compared with a nontoxic control material. The mechanical interlocking was enhanced, as expected, by the porosity, whereas the reinforcing fibers protruding from the surface of the implant gave additional strength when tested in a bone-bonding model. Animal experiments verified that the material can withstand load-bearing conditions in prolonged use without breaking or creating stress-shielding effects in the host bone. A histological examination verified the enhanced incorporation into host bone, with an abundance of bone growth onto and over the material, achieved with minimal tissue reactions to a foreign body. An FRC implant with surface porosity displays potential in the field of reconstructive surgery, especially for large bone defects with high demands on strength and shape retention in load-bearing areas, or in flat bones such as facial and cranial bones. The benefits of modifying the strength of the material and adjusting the surface properties with fiber reinforcement and bone-bonding additives to meet the requirements of different bone qualities remain to be fully explored.
Abstract:
Knowledge transfer is a complex process. Knowledge transfer in the form of exporting education products from one system of education to another is particularly complicated, because each system has been developed in a particular context to meet the requirements seen as relevant at the time. National innovation systems are often seen as forming an essential framework within which the development of a country, its economy and its level of knowledge are considered and promoted. These systems are oriented towards the future, and as such they also provide a framework for the knowledge transfer related to the development of education. In the best of circumstances they are able to facilitate and boost this transfer from the viewpoint of both the provider and the recipient. The leading idea of the study is that education export is a form of knowledge transfer, which is illustrated by the existing models included. The purpose of this study is to explore, analyze and describe the factors and phenomena related to education export, and more specifically those related to the experiences and potential of Finnish education export to Chile. For a better understanding of the multiplicity of the issues involved, the current status of education export between Finland and Chile and the existing efforts within the Finnish innovation network are outlined, and new forms of co-operation between Finland and Chile in educational matters are explored. Several countries have started to commercialize their education systems in order to establish themselves as emerging education exporters. Moreover, the demand for education reform is pressing in many developing countries. This makes Finland and Chile a good pair of example countries for the research. The main research findings suggest that there are several business areas in education export.
These include, for example, degrees in education, training services and education technologies. The factors that influence education export can be divided into four groups: academic, cultural, political and economic. Challenges to overcome include the lack of a product or service to be sold, lack of market and cultural knowledge of the buyer country, financing, and the lack of a suitable pricing model. National innovation systems can be seen as enabling entities for successful education export. The extensive networks that national innovation systems aim to form could serve as a basis for joining forces in selling knowledge as well as receiving knowledge in a constructive way.
Abstract:
This study examines the structure of the Russian Reflexive Marker (-ся/-сь) and offers a usage-based model building on Construction Grammar and a probabilistic view of linguistic structure. Traditionally, reflexive verbs are accounted for relative to non-reflexive verbs. These accounts assume that linguistic structures emerge as pairs, and they assume a directionality whereby the semantics and structure of a reflexive verb can be derived from the non-reflexive verb. However, this directionality does not necessarily hold diachronically. Additionally, the semantics and the patterns associated with a particular reflexive verb are not always shared with the non-reflexive verb. Thus, a model is proposed that can accommodate the traditional pairs as well as the possible deviations without postulating different systems. A random sample of 2000 instances marked with the Reflexive Marker was extracted from the Russian National Corpus; the sample used in this study contains 819 unique reflexive verbs. This study moves away from the traditional pair account and introduces the concept of the Neighbor Verb: a neighbor verb exists for a reflexive verb if they share the same phonological form excluding the Reflexive Marker. It is claimed here that the Reflexive Marker constitutes a system in Russian and that the relation between reflexive and neighbor verbs constitutes a cross-paradigmatic relation. Furthermore, the relation between the reflexive and the neighbor verb is argued to be one of symbolic connectivity rather than directionality. Effectively, the relation holding between particular instantiations can vary. The theoretical basis of the present study builds on this assumption. Several new variables are examined in order to systematically model the variability of this symbolic connectivity, specifically the degree and strength of connectivity between items. In usage-based models, the lexicon does not constitute an unstructured list of items.
Instead, items are assumed to be interconnected in a network. This interconnectedness is defined as the Neighborhood in this study. Additionally, each verb carves its own niche within the Neighborhood, and this interconnectedness is modeled through rhyme verbs, which constitute the degree of connectivity of a particular verb in the lexicon. The second component of the degree of connectivity concerns the status of a particular verb relative to its rhyme verbs. The connectivity within the neighborhood of a particular verb varies, and this variability is quantified using the Levenshtein distance. The second property of the lexical network is the strength of connectivity between items. Frequency of use has been one of the primary variables used in functional linguistics to probe this. In addition, a new variable called Constructional Entropy is introduced in this study, building on information theory: it quantifies the amount of information carried by a particular reflexive verb in one or more argument constructions. The results on lexical connectivity indicate that reflexive verbs have statistically greater neighborhood distances than neighbor verbs. This distributional property can be used to motivate the traditional observation that reflexive verbs tend to have idiosyncratic properties. A set of argument constructions, generalizations over usage patterns, is proposed for the reflexive verbs in this study. In addition to the variables associated with lexical connectivity, a number of variables proposed in the literature are explored and used as predictors in the model. The second part of this study introduces the use of a machine learning algorithm called Random Forests. The performance of the model indicates that it is capable, up to a degree, of disambiguating the proposed argument construction types of the Russian Reflexive Marker. Additionally, a global ranking of the predictors used in the model is offered.
Finally, most construction grammars assume that argument constructions form a network structure. A new method is proposed that establishes generalizations over the argument constructions, referred to as the Linking Construction. In sum, this study explores the structural properties of the Russian Reflexive Marker and sets forth a new model that can accommodate both the traditional pairs and potential deviations from them in a principled manner.
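The Random Forests step described above can be sketched with scikit-learn's `RandomForestClassifier`. The data here are synthetic stand-ins: the predictor names mirror the variables discussed in the abstract (neighborhood distance, frequency, constructional entropy), but the values and labels are invented for illustration, not drawn from the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-ins for the study's predictors (hypothetical):
# neighborhood distance, log token frequency, constructional entropy.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))
# Toy labels standing in for two argument-construction types; here the
# type depends mostly on the first predictor, with some noise from the second.
y = (X[:, 0] + 0.3 * X[:, 1] > 0).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)

# A global ranking of predictors, analogous to the one the study reports,
# derived from the forest's impurity-based feature importances.
names = ["neighborhood_distance", "log_frequency", "constructional_entropy"]
ranking = sorted(zip(names, clf.feature_importances_),
                 key=lambda pair: -pair[1])
for name, importance in ranking:
    print(f"{name}: {importance:.3f}")
```

With real corpus data the labels would be the proposed argument construction types, and classification accuracy on held-out tokens would indicate how well the predictors disambiguate them.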
Abstract:
Poverty alleviation views have shifted from seeing the poor as victims or as potential consumers to seeing them as gainers. Social businesses include microfinancing and microfranchising, which engage people at the bottom of the pyramid through business instead of charity. There are, however, social business firms that do not fit the existing social business model theory. These firms provide markets to poor producers and combine traditional, local craftsmanship with Western design. Social business models evolve faster than the academic literature can study them, and this study contributes to filling that gap. The purpose of this Master's thesis is to develop the concept of social business as a poverty alleviation method in developing countries. It also aims: 1) to describe the means of poverty alleviation in developing countries; 2) to introduce microbusiness as a social business model; and 3) to examine the challenges of microbusinesses. A qualitative case study is used as the research strategy and theme interviews as the data collection method. The empirical data were gathered from interviews with four Finnish or Finnish-owned firms that employ microbusiness (Mifuko, Tensira, Mangomaa and Tikau), supported with secondary data including articles on the case companies. The results show that microbusiness is a valid new social business model that aims at poverty alleviation by engaging the poor at the bottom of the pyramid. It is possible to map the value proposition, value constellation, and economic and social profit equations of the case firms. Two major types of firms emerge from the results: the first consists of design-oriented firms that emphasize the quality and design of the products, and the second of bazaar-like firms whose product portfolio is less sophisticated and which promote the stories behind the products rather than their design.
All microbusiness firms provide markets, promote traditional handicrafts, form close relationships with their producers, and aim at enhancing lives through their businesses. Attitudes towards social businesses are sometimes negative, but this is changing for the better. In conclusion, microbusiness answers two different needs at the same time: consumers' need for ethical products and the social needs of the producers. The social need, however, is the ultimate reason why the entrepreneurs started their businesses. Microbusiness serves as a poverty alleviation tool that sees the poor as gainers: by providing them with steady employment, it increases the poor's self-esteem and enables them to earn a better living. The academic literature has not yet offered enough alternative business models to cover all social businesses; the current study contributes by concluding that microbusiness is another such social business model.