954 results for Object-oriented methods


Relevance:

30.00%

Publisher:

Abstract:

The usage of digital content, such as video clips and images, has increased dramatically during the last decade. Local image features have been applied increasingly in various image and video retrieval applications. This thesis evaluates local features and applies them to image and video processing tasks. The results of the study show that 1) the performance of different local feature detector and descriptor methods varies significantly in object class matching, 2) local features can be applied to image alignment with results superior to the state of the art, 3) the local feature based shot boundary detection method produces promising results, and 4) the local feature based hierarchical video summarization method points to a promising new research direction. In conclusion, this thesis presents local features as a powerful tool in many applications, and future work should concentrate on improving the quality of the local features.
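
As a rough illustration of the kind of pipeline evaluated here, the sketch below aligns two images with local features using OpenCV. ORB and the RANSAC threshold are stand-ins chosen for the example, not the thesis's evaluated detector/descriptor methods.

```python
# Illustrative sketch: local-feature image alignment with OpenCV.
# ORB stands in for the detector/descriptor pairs the thesis
# evaluates; the exact methods and parameters are assumptions.
import cv2
import numpy as np

def align(img_ref, img_mov, max_feats=2000):
    orb = cv2.ORB_create(max_feats)
    k1, d1 = orb.detectAndCompute(img_ref, None)
    k2, d2 = orb.detectAndCompute(img_mov, None)
    # Hamming distance suits ORB's binary descriptors; cross-checking
    # keeps only mutually best matches.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d2, d1), key=lambda m: m.distance)
    src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # RANSAC rejects outlier correspondences before estimating the homography.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = img_ref.shape[:2]
    return cv2.warpPerspective(img_mov, H, (w, h))
```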

Relevance:

30.00%

Publisher:

Abstract:

This Master's thesis examines competence management in the Lappeenranta parish union (Lappeenrannan seurakuntayhtymä) from the perspective of its vicars. The aim of the study is to determine how the vicars' competence management can be developed. The thesis examines the vicars' roles and duties as well as the competence development methods currently in use. It also addresses the challenges of competence management and the special characteristics of spiritual work. The study was carried out as a qualitative case study. The empirical data were collected by interviewing all five vicars of the Lappeenranta parish union. The results show that competence management in the parish union is not particularly systematic or long-term. The changing societal position of the church and the declining membership are seen as the main future challenges. The vicars consider lack of time the greatest challenge related to competence management. In their views of their own role in competence management, the vicars emphasize managing the whole, defining general guidelines and clarifying a shared direction. Many competence development methods are in use, but the main emphasis is on discussions, meetings and training. The special characteristics of spiritual work are seen in the church's distinctive values and in the personal, intimate nature of faith. Competence should be made a conscious object of management in the parish union. The vicars can develop their own competence management by improving their awareness of a manager's different roles and fields of responsibility. In the future, particular attention should be paid to supporting individual learning and to creating an atmosphere that promotes learning. Among competence development methods, various forms of on-the-job learning are especially recommended.

Relevance:

30.00%

Publisher:

Abstract:

Object detection is a fundamental task of computer vision that is used as a core part of a number of industrial and scientific applications, for example in robotics, where objects need to be correctly detected and localized prior to being grasped and manipulated. Existing object detectors vary in (i) the amount of supervision they need for training, (ii) the type of learning method adopted (generative or discriminative) and (iii) the amount of spatial information used in the object model (model-free, using no spatial information, or model-based, with an explicit spatial model of the object). Although some existing methods report good performance in the detection of certain objects, the results tend to be application specific, and no universal method has been found that clearly outperforms all others in all areas. This work proposes a novel generative part-based object detector. The generative learning procedure of the developed method allows learning from positive examples only. The detector is based on finding semantically meaningful parts of the object (i.e. a part detector) that can provide information beyond object location, for example pose. The object class model, i.e. the appearance of the object parts and their spatial variance (constellation), is explicitly modelled in a fully probabilistic manner. The appearance is based on bio-inspired complex-valued Gabor features that are transformed to part probabilities by an unsupervised Gaussian Mixture Model (GMM). The proposed novel randomized GMM enables learning from only a few training examples. The probabilistic spatial model of the part configurations is constructed with a mixture of 2D Gaussians. The appearance of the object parts is learned in an object canonical space that removes geometric variations from the part appearance model. Robustness to pose variations is achieved by object pose quantization, which is more efficient than the previously used scale and orientation shifts in the Gabor feature space. The performance of the resulting generative object detector is characterized by high recall with low precision, i.e. the generative detector produces a large number of false positive detections. A discriminative classifier is therefore used to prune false positive candidate detections produced by the generative detector, improving precision while keeping recall high. Using only a small number of positive examples, the developed object detector performs comparably to state-of-the-art discriminative methods.
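
The appearance/constellation split described above can be sketched with off-the-shelf components. In the sketch below, sklearn's standard GaussianMixture stands in for the thesis's novel randomized GMM, and the Gabor feature extraction is assumed given (random vectors serve as placeholders).

```python
# Minimal sketch of the two learned pieces of the generative pipeline:
# an appearance GMM turning (Gabor-like) features into part
# probabilities, and a 2D Gaussian mixture over part locations.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
feats = rng.normal(size=(500, 40))   # per-point feature vectors (stand-in)
locs = rng.normal(size=(500, 2))     # part locations in canonical space

# Appearance model: soft assignment of a feature to K part clusters.
appearance = GaussianMixture(n_components=5, covariance_type="diag").fit(feats)
part_probs = appearance.predict_proba(feats)   # shape (N, K)

# Spatial model: mixture of 2D Gaussians over part configurations.
spatial = GaussianMixture(n_components=3, covariance_type="full").fit(locs)
log_spatial = spatial.score_samples(locs)      # log-density per location

# A candidate detection could be scored by combining both terms.
score = part_probs.max(axis=1) * np.exp(log_spatial)
```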

Relevance:

30.00%

Publisher:

Abstract:

With digitalization, traffic too is becoming increasingly intelligent. The government is dismantling regulation and allowing wider use of digital methods. As an industry, however, driver training is considered highly conventional. The purpose of this Master's thesis is to examine what digitalization means for the business models of driver training companies. Empirical data were collected through thematic interviews and analyzed with qualitative methods. The thesis presents the industry's strengths, weaknesses, opportunities and threats, as well as future scenarios. Digitalization causes significant changes for companies in the driver training industry. The car is no longer the status symbol, or the object of spending, that it used to be. People of the digital age do not always need physical mobility, while even brick-and-mortar shops are in decline. A driving license is often no longer seen as a necessary rite of passage into adulthood. New technology can nevertheless radically improve the performance of companies in the industry: services become scalable and independent of time and place. For consumers, digitalization in turn improves customer orientation. Four underlying forces shape the development of the industry's business models: digitalization, tradition, regulation and entrepreneurship. The business model comprises the core teaching activities, internal processes, business support functions and the value proposition to the customer. Developing the business to meet the demands of digitalization requires a proactive innovation strategy. With innovation methods based on such a strategy, a company can develop its business model to exploit the opportunities offered by digitalization and arising from information asymmetry.

Relevance:

30.00%

Publisher:

Abstract:

ABSTRACT Towards a contextual understanding of B2B salespeople's selling competencies − an exploratory study among purchasing decision-makers of internationally-oriented technology firms

The characteristics of modern selling can be classified as follows: customer retention and loyalty targets, database and knowledge management, customer relationship management, marketing activities, problem solving and system selling, and satisfying needs and creating value. For salespeople to be successful in this environment, they need a wide range of competencies. Salespeople's selling skills are well documented in seller-side literature through quantitative methods, but the knowledge, skills and competencies from the buyer's perspective are under-researched. The existing research on selling competencies should be broadened and updated through a qualitative research perspective due to the dynamic nature and contextual dependence of selling competencies. The purpose of the study is to increase understanding of the professional salesperson's selling competencies from the industrial purchasing decision-makers' viewpoint within the relationship selling context. In this study, competencies are defined as sales-related knowledge and skills. The scope of the study includes goods, materials and services managed by a company's purchasing function and used by an organization on a daily basis. The abductive approach and 'systematic combining' were applied as the research strategy. Data were generated through semi-structured, person-to-person interviews with open-ended questions. The study was conducted among purchasing decision-makers in the technology industry in Finland. The branches consisted of the electronics and electro-technical industries and the mechanical engineering and metals industries. A total of 30 companies, and one purchasing decision-maker from each, were purposively chosen for the sample. The sample covers different company sizes based on revenue and differing structures, varying from public to family companies, with both domestic and international ownership. Before analysis, the data were organized by the buyers' purchasing orientations: the buying, procurement or supply management orientation. Thematic analysis was chosen as the analysis method, and the results were then contrasted with the theory, with continuous interaction between the empirical data and the theory. Based on the findings, a total of 19 major areas of knowledge and skills were identified from the buyers' perspective. The specific knowledge and skills were divided into two categories according to the customers' prevalent purchasing orientations: generic and contextual. The generic knowledge and skills apply to all purchasing orientations, while the contextual knowledge and skills depend on the customer's prevalent purchasing orientation. Generic knowledge and skills relate to price setting, negotiation, communication and interaction skills, while contextual ones relate to knowledge brokering, the ability to present solutions and relationship skills. Buying-oriented buyers value salespeople who are 'action-oriented experts, however at a bit of an arm's length', procurement buyers value salespeople who are 'experts deeply dedicated to the customer and fostering the relationship', and supply management buyers value salespeople who are 'corporate-oriented experts'.
In addition, the buyers' perceptions of knowledge and selling skills differ from the sellers'. The buyer side emphasizes command of the subject matter, consisting of expertise, understanding the customer's business and needs, creating a customized solution and creating value, reliability, and the ability to build long-term relationships, while the seller side emphasizes communication, interaction and salesmanship skills. The study integrates the selling skills of the current three-component model (technical knowledge, salesmanship skills, interpersonal skills) with relationship skills and purchasing orientations into a selling competency model. The findings deepen and update the content of this knowledge and these skills in the B2B setting and create new insights into them from the buyer's perspective; the study thus increases contextual understanding of selling competencies. It generates new knowledge of salespeople's competencies for the literature on relationship selling, personal selling and sales management, and it adds knowledge of buying orientations to the buying behavior literature. The findings challenge sales management to perceive salespeople's selling skills from both a contingency and a competence perspective. The study has several managerial implications: it increases understanding of the critical selling knowledge and skills from the buyer's point of view, of how salespeople effectively implement the relationship marketing concept, of how to manage the sales process more effectively and efficiently, and of how sales management should develop salespeople's selling competencies when managing and developing the sales force. Keywords: selling competencies, knowledge, selling skills, relationship skills, purchasing orientations, B2B selling, abductive approach, technology firms

Relevance:

30.00%

Publisher:

Abstract:

Software performance is a holistic matter that every phase of the software life cycle affects. Performance problems often lead to delayed projects, cost overruns and, in some cases, complete project failure. Software performance engineering (SPE) is a software-oriented approach that offers techniques for developing performant software. This Master's thesis studies these techniques and selects from among them those suited to solving performance problems in the development of two IT device management products. The outcome of the work is an updated version of the current product development process that takes application performance challenges into account at the different stages of the product life cycle.

Relevance:

30.00%

Publisher:

Abstract:

In Quebec, nearly 25,000 people, mainly seniors, are affected by Parkinson's disease (PD), most of them cared for by their spouse. At the moderate stage, PD impairs the health and quality of life of these couples. This stage is well suited to introducing dyadic interventions, because couples experience growing losses that require many adjustments. Nevertheless, no study had yet examined their intervention needs during this transition, and few interventions to support them have been evaluated. Using the transition experience theory of Meleis et al. (2000) and the systemic approach of Wright and Leahey (2009) as frameworks, this study aimed to develop, pilot and evaluate an intervention for older couples living with PD at the moderate stage. To this end, a qualitative design and a participatory approach were adopted. The development and evaluation of the intervention are based on the Intervention Mapping methodological framework of Bartholomew et al. (2006) and on the writings of Miles and Huberman (2003). The study took place in an outpatient clinic specializing in PD. Ten couples and four practitioners collaborated in designing the intervention. Three new couples tried it out and evaluated it. The dyadic intervention comprises seven 90-minute sessions held every two weeks. The main themes, methods and intervention strategies are based on the needs and objectives of the dyads as well as on theories and empirical literature. The intervention focuses on the dyads' concerns, health promotion, problem solving, access to resources, communication and role adjustment. The results of the study demonstrated the feasibility, acceptability and usefulness of the intervention. The main improvements noted by the dyads were the adoption of health behaviors, the search for solutions tailored to the situations encountered and beneficial to both partners, the ability to call on services, and increased feelings of mastery, mutual support, pleasure and hope. This study provides guidance for nurses working in various fields of practice in developing and evaluating ecologically and theoretically grounded dyadic interventions.

Relevance:

30.00%

Publisher:

Abstract:

The biomedical domain is probably the domain with the richest terminological resources. These resources group together the different expressions denoting a concept and define relations between concepts. They are built to facilitate access to information in the domain, and it is generally believed that they are useful for biomedical information retrieval. Yet the results obtained so far are mixed: in some studies, the use of concepts improved retrieval performance, while in others, performance drops were observed. These results remain hard to compare, however, since they were obtained on different collections. Whether and how these resources can help improve biomedical information retrieval is still an open question. In this thesis, we compare the different concept-based approaches within a single framework: the approach using concept identifiers as the unit of representation, and the approach using synonymous expressions to expand the initial query. Compared with the traditional bag-of-words approach, our experimental results show that the first approach always degrades performance, whereas the second can improve it. In particular, by matching concept expressions as strict or flexible phrases, some methods bring significant improvements not only over the baseline bag-of-words method, but also over the Markov Random Field method, a state-of-the-art method in the field. These results show that, when concepts are used appropriately, they can contribute greatly to improving biomedical information retrieval performance. We participated in the ShARe/CLEF 2014 eHealth evaluation lab, where our result was the best among all participating systems.
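
A toy sketch of the two representations being compared, with an invented one-entry synonym resource and three invented documents; it only illustrates synonym-based query expansion against a bag-of-words baseline, not the thesis's actual retrieval models.

```python
# Toy sketch: plain bag-of-words retrieval vs. expanding the query
# with synonymous expressions from a concept resource (invented here).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "myocardial infarction treated with aspirin",
    "patients with heart attack admitted to cardiology",
    "influenza vaccination campaign results",
]
synonyms = {"heart attack": ["myocardial infarction"]}  # assumed resource entry

def retrieve(query, expand=False):
    if expand:
        for expr, syns in synonyms.items():
            if expr in query:
                query += " " + " ".join(syns)  # query expansion step
    vec = TfidfVectorizer().fit(docs)
    sims = cosine_similarity(vec.transform([query]), vec.transform(docs))[0]
    return sorted(zip(sims, docs), reverse=True)

print(retrieve("heart attack", expand=False)[0])
print(retrieve("heart attack", expand=True)[0])  # expansion also hits doc 0
```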

Relevance:

30.00%

Publisher:

Abstract:

Embedded systems are usually designed for a single task or a specified set of tasks. This specificity means the system design, as well as its hardware/software development, can be highly optimized. Embedded software must meet requirements such as highly reliable operation on resource-constrained platforms, real-time constraints and rapid development. This necessitates the adoption of static machine-code analysis tools running on a host machine for the validation and optimization of embedded system code, which can help meet all of these goals. This can significantly improve software quality and is still a challenging field.

This dissertation contributes an architecture-oriented code validation, error localization and optimization technique that assists the embedded system designer in software debugging, using static analysis of machine code to make early detection of software bugs that are otherwise hard to detect more effective. The focus of this work is to develop methods that automatically localize faults as well as optimize the code, and thus improve both the debugging process and the quality of the code.

Validation is done with the help of rules of inference formulated for the target processor. The rules govern the occurrence of illegitimate or out-of-place instructions and code sequences for executing the computational and integrated peripheral functions. The stipulated rules are encoded in propositional logic formulae, and their compliance is tested individually in all possible execution paths of the application programs. An incorrect sequence of machine code patterns is identified using slicing techniques on the control flow graph generated from the machine code.

An algorithm is proposed to assist the compiler in eliminating redundant bank switching code and deciding on optimum data allocation to banked memory, resulting in a minimum number of bank switching instructions in embedded system software. A relation matrix and a state transition diagram, formed for the active memory bank state transitions corresponding to each bank selection instruction, are used for the detection of redundant code. Instances of code redundancy based on the stipulated rules for the target processor are identified.

This validation and optimization tool can be integrated into the system development environment. It is a novel approach, independent of compiler and assembler, and applicable to a wide range of processors once appropriate rules are formulated. Program states are identified mainly from machine code patterns, which drastically reduces the state space and thus contributes to improved model checking. Though the technique described is general, the implementation is architecture oriented, and hence the feasibility study was conducted on PIC16F87X microcontrollers. The proposed tool will be very useful in steering novices towards correct use of difficult microcontroller features in developing embedded systems.
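
Both analyses read naturally as state-tracking passes over an instruction stream. The sketch below, over a hypothetical instruction list (these are not real PIC16F87X opcodes, and the legality rule is invented), shows rule checking and redundant bank-select elimination in miniature.

```python
# Schematic sketch of the two analyses: (1) flag rule violations
# (an "illegitimate sequence"), (2) drop bank-select instructions
# that reselect the already-active bank. Instruction names and the
# rule are hypothetical.
program = ["BANKSEL 1", "MOVF", "BANKSEL 1", "ADDWF", "BANKSEL 0", "READ_ADC"]

def validate(prog):
    """Rule (assumed): READ_ADC is only legal while bank 0 is selected."""
    bank, errors = None, []
    for i, ins in enumerate(prog):
        if ins.startswith("BANKSEL"):
            bank = int(ins.split()[1])
        elif ins == "READ_ADC" and bank != 0:
            errors.append((i, ins, bank))
    return errors

def drop_redundant_banksel(prog):
    """Track the active-bank state; keep a BANKSEL only if it changes it."""
    bank, out = None, []
    for ins in prog:
        if ins.startswith("BANKSEL"):
            target = int(ins.split()[1])
            if target == bank:
                continue          # redundant reselection, eliminated
            bank = target
        out.append(ins)
    return out

print(validate(program))                # [] here: READ_ADC follows BANKSEL 0
print(drop_redundant_banksel(program))  # second "BANKSEL 1" removed
```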

Relevance:

30.00%

Publisher:

Abstract:

The pedicle screw insertion technique has revolutionized the surgical treatment of spinal fractures and spinal disorders. Although X-ray fluoroscopy based navigation is popular, it carries a risk of prolonged exposure to X-ray radiation, and systems with lower radiation risk are generally quite expensive. The position and orientation of the drill are clinically very important in pedicle screw fixation. In this paper, the position and orientation of the marker on the drill are determined using pattern recognition based methods and geometric features obtained from the input video sequence taken from a CCD camera. A search is then performed on the preprocessed video frames to obtain the exact position and orientation of the drill. An animated graphic showing the instantaneous position and orientation of the drill is then overlaid on the processed video for real-time drill control and navigation.
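
A minimal sketch of the per-frame step with OpenCV, assuming a bright, high-contrast marker; the fixed threshold and the use of a minimum-area rectangle for orientation are illustrative choices, not the paper's exact geometric features.

```python
# Hedged sketch: segment a high-contrast marker, estimate its position
# and orientation, and draw the overlay for navigation feedback.
import cv2

def track_marker(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)  # bright marker
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return frame_bgr, None
    marker = max(contours, key=cv2.contourArea)        # largest blob = marker
    (cx, cy), (w, h), angle = cv2.minAreaRect(marker)  # pose estimate
    # Overlay the estimated pose on the frame.
    box = cv2.boxPoints(((cx, cy), (w, h), angle)).astype(int)
    cv2.drawContours(frame_bgr, [box], 0, (0, 255, 0), 2)
    return frame_bgr, (cx, cy, angle)
```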

Relevance:

30.00%

Publisher:

Abstract:

Land use is a crucial link between human activities and the natural environment and one of the main driving forces of global environmental change. Large parts of the terrestrial land surface are used for agriculture, forestry, settlements and infrastructure. Given the importance of land use, it is essential to understand the multitude of influential factors and the resulting land use patterns. An essential methodology to study and quantify such interactions is provided by land-use models, whose application makes it possible to analyze the complex structure of linkages and feedbacks and to determine the relevance of driving forces. Modeling land use and land use changes has a long tradition. In particular on the regional scale, a variety of models for different regions and research questions has been created. Modeling capabilities grow with steady advances in computer technology, driven on the one hand by increasing computing power and on the other by new methods in software development, e.g. object- and component-oriented architectures. In this thesis, SITE (Simulation of Terrestrial Environments), a novel framework for integrated regional land-use modeling, is introduced and discussed. Particular features of SITE are its notably extended capability to integrate models and its strict separation of application and implementation. These features enable efficient development, testing and usage of integrated land-use models. On its system side, SITE provides generic data structures (grid, grid cells, attributes etc.) and takes over responsibility for their administration. By means of a scripting language (Python) that has been extended with language features specific to land-use modeling, these data structures can be utilized and manipulated by modeling applications. The scripting language interpreter is embedded in SITE. The integration of sub-models can be achieved via the scripting language or through a generic interface provided by SITE. Furthermore, functionalities important for land-use modeling, such as model calibration, model tests and analysis support for simulation results, have been integrated into the generic framework. During the implementation of SITE, specific emphasis was laid on expandability, maintainability and usability. Along with the modeling framework, a land use model for the analysis of the stability of tropical rainforest margins was developed in the context of the collaborative research project STORMA (SFB 552). In a research area in Central Sulawesi, Indonesia, socio-environmental impacts of land-use changes were examined. SITE was used to simulate land-use dynamics in the historical period of 1981 to 2002. In addition, a scenario that did not consider migration in the population dynamics was analyzed. For the calculation of crop yields and trace gas emissions, the DAYCENT agro-ecosystem model was integrated. In this case study, it could be shown that land-use changes in the Indonesian research area were mainly characterized by the expansion of agricultural areas at the expense of natural forest. For this reason, the situation had to be interpreted as unsustainable even though increased agricultural use implied economic improvements and higher farmers' incomes. Due to the importance of model calibration, it was explicitly addressed in the SITE architecture through the introduction of a specific component.
The calibration functionality can be used by all SITE applications and enables largely automated model calibration. Calibration in SITE is understood as a process that finds an optimal, or at least adequate, solution for a set of arbitrarily selectable model parameters with respect to an objective function. In SITE, an objective function is typically a map comparison algorithm capable of comparing a simulation result to a reference map. Several map optimization and map comparison methodologies are available and can be combined. The STORMA land-use model was calibrated using a genetic algorithm for optimization and the figure-of-merit map comparison measure as the objective function. The calibration period ranged from 1981 to 2002, for which respective reference land-use maps were compiled. It could be shown that efficient automated model calibration with SITE is possible. Nevertheless, the selection of the calibration parameters required detailed knowledge about the underlying land-use model and cannot be automated. In another case study, decreases in crop yields and the resulting losses in income from coffee cultivation were analyzed and quantified under the assumption of four different deforestation scenarios. For this task, an empirical model describing the dependence of bee pollination, and the resulting coffee fruit set, on the distance to the closest natural forest was integrated. Land-use simulations showed that, depending on the magnitude and location of ongoing forest conversion, pollination services are expected to decline continuously. This results in a reduction of coffee yields of up to 18% and a loss of net revenues per hectare of up to 14%. However, the study also showed that ecological and economic values can be preserved if patches of natural vegetation are conserved in the agricultural landscape.
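
A compact sketch of two calibration ingredients named above: a figure-of-merit map comparison over changed cells as the objective function, and a naive random search standing in for the genetic algorithm. The map encoding and the single free parameter are invented for illustration.

```python
# Sketch of SITE-style calibration pieces (not SITE's actual API).
import numpy as np

def figure_of_merit(initial, simulated, reference):
    """Agreement on change: intersection over union of changed cells."""
    sim_change = simulated != initial
    ref_change = reference != initial
    hits = np.logical_and(sim_change, ref_change).sum()
    union = np.logical_or(sim_change, ref_change).sum()
    return hits / union if union else 1.0

def calibrate(run_model, initial, reference, n_trials=200, seed=0):
    """Random search over one hypothetical model parameter in [0, 1]."""
    rng = np.random.default_rng(seed)
    best_p, best_fom = None, -1.0
    for _ in range(n_trials):
        p = rng.uniform(0.0, 1.0)
        fom = figure_of_merit(initial, run_model(initial, p), reference)
        if fom > best_fom:
            best_p, best_fom = p, fom
    return best_p, best_fom
```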

Relevance:

30.00%

Publisher:

Abstract:

During the drying process, medicinal herbs are exposed to numerous influences that decisively affect the quality of the end product. This research deals with the drying of lemon balm (Melissa officinalis L.) into a high-quality end product. Drying strategies are proposed that incorporate experimental and mathematical aspects in order to achieve the required quality characteristics, with regard to color change and essential oil content, at adequate productivity. Because of various problems in the drying process, dried lemon balm currently cannot always meet the market's high quality requirements. There is no standardized information on the individual, complex drying parameters. In practice, drying is based on rules of thumb, or procedures for drying other plants are copied; often the drying is not reproducible or rests on subjective approximations. As a consequence of this unsuitable choice of drying parameters, problems frequently arise, such as over-drying, which leads to increased breakage losses of the leaf mass, or insufficient drying, which results in too high a final moisture content in the product. The latter inevitably leads to unacceptable color change and an excessive loss of essential oils. Because of the differing thermal and mechanical properties of leaves and stems, uneven drying is the rule. An unnecessarily long drying time is also observed, which increases energy consumption. Drying in solar tunnel dryers brings a further problem: because of the uncontrolled incidence of radiation, it is difficult to regulate the drying temperature, and the radiation also affects the color of the product through photochemical reactions. In addition, the large fluctuations in radiation, temperature and air humidity create unstable conditions for uniform and controllable drying.

In view of these problems, this work sets the following research priorities: new strategies for improving quality are developed with the aim of reducing drying time and energy consumption. To propose a methodology based on optimal drying parameters, temperature and air humidity were considered as variables with respect to drying time, essential oil content, color change and the required energy. Furthermore, these parameters and their effects on the quality characteristics were analyzed in solar tunnel dryers. Different approaches were pursued to reach these goals. The sorption isotherms and the drying kinetics of lemon balm, and their fitting to various mathematical models, were determined. An alternative staged drying in stepped phases was also carried out in order to increase the quality of the end product while lowering the total energy consumption. In addition, a statistical experimental design based on the CCD (Central Composite Design) and RSM (Response Surface Methodology) methods was proposed to obtain the desired quality characteristics and the necessary energy input as functions of air temperature and humidity. From the data obtained, regression models were generated and the behavior of the drying process was described. Finally, a statistical DOE (design of experiments) approach was applied to assess the influence of the parameters on the product quality achievable in a solar tunnel dryer. The effects of shading, position in the tunnel, loading density and air velocity on drying time, color change and essential oil content were analyzed, and corresponding regression models for use in solar tunnel dryers were developed. The key results are analyzed with respect to optimal drying parameters in terms of quality and energy consumption.
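
As one concrete instance of fitting drying kinetics to a mathematical model, the sketch below fits the Page thin-layer model MR = exp(-k t^n) with scipy. The Page model is assumed here to be among the "various mathematical models" meant, and the measurement data are invented.

```python
# Illustrative sketch: fitting a thin-layer drying curve to the Page model.
import numpy as np
from scipy.optimize import curve_fit

def page_model(t, k, n):
    return np.exp(-k * t**n)

t = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 5.0, 8.0])         # drying time, h
mr = np.array([1.0, 0.78, 0.62, 0.40, 0.27, 0.13, 0.04])  # moisture ratio

# Non-negative bounds keep k and n physically meaningful during the fit.
(k, n), _ = curve_fit(page_model, t, mr, p0=(0.3, 1.0), bounds=(0, np.inf))
print(f"k = {k:.3f}, n = {n:.3f}")  # fitted drying-kinetics constants
```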

Relevance:

30.00%

Publisher:

Abstract:

This thesis aims to contribute to an (empirically) substantive microfoundation of innovation processes within an evolutionary perspective. Its behavioral focus is oriented, to varying degrees, toward the actor and innovation model of Herbert Simon and the Carnegie School, which it complements, specifies and extends with, among other things, in-depth findings from creativity and cognition research, psychology and trust research, as well as modern innovation research. It also addresses a socially and economically relevant subject area of innovation: environmental innovation. The work is both conceptual and empirical, and it also applies computer simulation in the form of two multi-agent systems. As a summarizing result it can be stated, in general, that innovations are to be seen as highly precarious processes that rest on a combination of specific actor characteristics, actor constellations and environmental conditions, are subject to iteration loops (through learning, feedback and the building of trust, among others) and form part of a more comprehensive action context and, in the case of firms, organizational context. Actors' behavior and their interaction are the starting point for emergence at the meso and macro levels. The analyses in the five papers contained in this thesis show, specifically, that the approach of Herbert Simon and the Carnegie School provides a suitable theoretical basis for a process-oriented microfoundation of innovation and, when suitably complemented and adapted to the respective object of study, enables a differentiated view of different types of innovation processes and their actor-based foundations at both the individual and the firm level. It also becomes clear that, with its initiation model, the approach of Herbert Simon and the Carnegie School introduces an additional aspect into the discussion that has so far received little attention yet is constitutive for an economic perspective: the analysis of the determinants (and the process) of the decision to innovate. For even if understanding the processes and determinants of the creation, implementation and diffusion of innovations is of fundamental importance, the question of why and under what circumstances actors decide in favor of innovations is ultimately a central core area of an economic analysis. The results of the work are also relevant for practical economic policy, especially with regard to innovation processes and environmental effects.
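
The two multi-agent systems are described only at a high level; as a loudly hypothetical sketch of the satisficing logic associated with the Carnegie-School framing (problemistic search triggered when performance falls below aspiration), consider the following. All numbers and rules are invented for illustration.

```python
# Minimal multi-agent sketch: agents satisfice and search for an
# innovation only when performance falls below their aspiration level.
import random

random.seed(1)

class Firm:
    def __init__(self):
        self.performance = random.uniform(0.4, 0.8)
        self.aspiration = 0.6  # satisficing threshold (assumed)

    def step(self):
        if self.performance < self.aspiration:       # problemistic search
            innovation = random.uniform(-0.1, 0.3)   # uncertain outcome
            self.performance += innovation
        # Aspiration adapts toward realized performance (simple learning).
        self.aspiration += 0.2 * (self.performance - self.aspiration)

firms = [Firm() for _ in range(100)]
for _ in range(50):
    for f in firms:
        f.step()
print(sum(f.performance for f in firms) / len(firms))  # mean performance
```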

Relevance:

30.00%

Publisher:

Abstract:

This paper describes a general, trainable architecture for object detection that has previously been applied to face and people detection, with a new application to car detection in static images. Our technique is a learning based approach that uses a set of labeled training data from which an implicit model of an object class, here cars, is learned. Instead of pixel representations, which may be noisy and therefore fail to provide a compact representation for learning, our training images are transformed from pixel space to that of Haar wavelets, which respond to local, oriented, multiscale intensity differences. These feature vectors are then used to train a support vector machine classifier. The detection of cars in images is an important step in applications such as traffic monitoring, driver assistance systems, and surveillance, among others. We show several examples of car detection on out-of-sample images and present an ROC curve that highlights the performance of our system.
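
The feature-then-classifier pipeline can be sketched compactly. The two-rectangle responses below are a crude stand-in for the paper's Haar wavelet transform, and the window size and training data are invented placeholders.

```python
# Sketch: images -> Haar-like features (two-rectangle differences
# computed from an integral image) -> SVM classifier.
import numpy as np
from sklearn.svm import SVC

def haar_features(img):
    """Two-rectangle responses: local, oriented intensity differences."""
    ii = img.cumsum(0).cumsum(1)  # integral image
    def rect(r0, c0, r1, c1):     # inclusive box sum via integral image
        return (ii[r1, c1] - (ii[r0-1, c1] if r0 else 0)
                - (ii[r1, c0-1] if c0 else 0)
                + (ii[r0-1, c0-1] if r0 and c0 else 0))
    h, w = img.shape
    return np.array([
        rect(0, 0, h//2 - 1, w - 1) - rect(h//2, 0, h - 1, w - 1),  # horizontal
        rect(0, 0, h - 1, w//2 - 1) - rect(0, w//2, h - 1, w - 1),  # vertical
    ])

rng = np.random.default_rng(0)
windows = rng.integers(0, 255, size=(200, 32, 32)).astype(float)
labels = rng.integers(0, 2, size=200)  # 1 = car (placeholder labels)

X = np.array([haar_features(w) for w in windows])
clf = SVC(kernel="rbf").fit(X, labels)  # the classifier stage
print(clf.predict(X[:5]))
```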

Relevance:

30.00%

Publisher:

Abstract:

Local descriptors are increasingly used for the task of object recognition because of their perceived robustness with respect to occlusions and to global geometrical deformations. We propose a performance criterion for a local descriptor based on the tradeoff between selectivity and invariance. In this paper, we evaluate several local descriptors with respect to selectivity and invariance. The descriptors that we evaluated are Gaussian derivatives up to the third order, gray image patches, and Laplacian-based descriptors with filters at either three scales or a single scale. We compare selectivity and invariance with respect to several affine changes such as rotation, scale, brightness, and viewpoint. Comparisons have been made keeping the dimensionality of the descriptors roughly constant. The overall results indicate a good performance by the descriptor based on a set of oriented Gaussian filters. It is interesting that oriented receptive fields similar to the Gaussian derivatives, as well as receptive fields similar to the Laplacian, are found in primate visual cortex.
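
A sketch of building a descriptor from oriented Gaussian derivative responses with scipy, in the spirit of the best-performing descriptor above. Only first- and second-order derivatives at two scales are shown (the paper evaluates up to third order), and the patch size and center sampling are assumptions.

```python
# Sketch: descriptor from oriented Gaussian derivative filter responses.
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_derivative_descriptor(patch, sigmas=(1.0, 2.0)):
    feats = []
    for s in sigmas:
        gx = gaussian_filter(patch, s, order=(0, 1))   # d/dx
        gy = gaussian_filter(patch, s, order=(1, 0))   # d/dy
        gxx = gaussian_filter(patch, s, order=(0, 2))  # second order
        gyy = gaussian_filter(patch, s, order=(2, 0))
        gxy = gaussian_filter(patch, s, order=(1, 1))
        c = patch.shape[0] // 2                        # sample at patch center
        feats += [g[c, c] for g in (gx, gy, gxx, gyy, gxy)]
    v = np.array(feats)
    return v / (np.linalg.norm(v) + 1e-8)  # normalize for brightness invariance

patch = np.random.default_rng(0).normal(size=(33, 33))
print(gaussian_derivative_descriptor(patch).shape)  # (10,)
```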