933 results for B2B-Segmentation
Summary:
Multispectral analysis is a promising approach to tissue classification and abnormality detection from Magnetic Resonance (MR) images, but instability in the accuracy and reproducibility of classification results from conventional techniques keeps it far from clinical application. Recent studies have proposed Independent Component Analysis (ICA) as an effective method for separating source signals from multispectral MR data. However, it often fails to extract local features such as small abnormalities, especially from dependent real data. A multisignal wavelet analysis prior to ICA is proposed in this work to resolve these issues: the best de-correlated detail coefficients are combined with the input images to give better classification results. The improvement of the proposed method over conventional ICA is demonstrated by segmentation and classification using k-means clustering. Experimental results from synthetic and real data strongly confirm the positive effect of the new method, with improved Tanimoto index/sensitivity values of 0.884/93.605 for reproduced small white matter lesions.
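The pipeline described above (wavelet detail coefficients combined with the input channels, unmixed by ICA, then clustered by k-means) can be sketched as follows. This is a minimal illustration on a hypothetical synthetic two-tissue phantom, using a hand-rolled one-level Haar transform and scikit-learn components as stand-ins for the paper's actual multisignal wavelet/ICA implementation:

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.cluster import KMeans

def haar_detail(img):
    """One-level horizontal Haar detail coefficients, upsampled back to
    the original width so they can be stacked as per-pixel features."""
    d = (img[:, 0::2] - img[:, 1::2]) / np.sqrt(2.0)
    return np.repeat(d, 2, axis=1)

rng = np.random.default_rng(0)
h, w = 32, 32
# hypothetical two-"tissue" phantom with a small bright lesion
base = np.zeros((h, w)); base[:, w // 2:] = 1.0
lesion = np.zeros((h, w)); lesion[8:12, 8:12] = 2.0
channels = [base + 0.1 * rng.normal(size=(h, w)),
            0.5 * base + lesion + 0.1 * rng.normal(size=(h, w))]

# per-pixel features: raw channels plus their wavelet detail coefficients
feats = np.stack([c.ravel() for c in channels] +
                 [haar_detail(c).ravel() for c in channels], axis=1)

# ICA unmixing followed by k-means segmentation of the source space
sources = FastICA(n_components=2, random_state=0, max_iter=1000).fit_transform(feats)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(sources)
```

The detail coefficients add edge/lesion-scale information that plain per-channel intensities lack, which is the motivation given in the abstract.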
Summary:
The characterization and grading of glioma tumors via image-derived features, for diagnosis, prognosis and treatment response, has been an active research area in medical image computing. This paper presents a novel method for automatic detection and classification of glioma from conventional T2-weighted MR images. Automatic detection of the tumor was established using a newly developed method called the Adaptive Gray level Algebraic set Segmentation Algorithm (AGASA). Statistical features were extracted from the detected tumor texture using first-order statistics and gray level co-occurrence matrix (GLCM) based second-order statistical methods. The statistical significance of the features was determined by a t-test and its corresponding p-value. A decision system was developed for glioma grade detection using the selected features and their p-values. The detection performance of the decision system was validated using the receiver operating characteristic (ROC) curve. The diagnosis and grading of glioma using this non-invasive method can contribute promising results in medical image computing.
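The feature-extraction and significance-testing steps (GLCM-based second-order statistics screened by a t-test) can be illustrated with a minimal sketch. The single GLCM contrast feature and the synthetic "smooth vs noisy" patches below are illustrative stand-ins, not the paper's AGASA method or data:

```python
import numpy as np
from scipy.stats import ttest_ind

def glcm_contrast(img, levels=8):
    """First-neighbour grey-level co-occurrence contrast: a standard
    second-order texture statistic (one member of the GLCM feature family)."""
    q = np.clip((img * levels).astype(int), 0, levels - 1)
    m = np.zeros((levels, levels))
    np.add.at(m, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)  # horizontal pairs
    m /= m.sum()
    i, j = np.indices(m.shape)
    return float((m * (i - j) ** 2).sum())

rng = np.random.default_rng(0)
# hypothetical "smooth" vs "rough" texture patches standing in for two grades
smooth = [np.tile(np.linspace(0, 1, 16), (16, 1)) + 0.02 * rng.normal(size=(16, 16))
          for _ in range(10)]
noisy = [rng.random((16, 16)) for _ in range(10)]

f_smooth = [glcm_contrast(p) for p in smooth]
f_noisy = [glcm_contrast(p) for p in noisy]

# t-test decides whether the feature separates the two groups significantly
t, p_value = ttest_ind(f_smooth, f_noisy)
```

A feature with a small p-value would then be retained for the decision system, mirroring the selection step described in the abstract.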
Summary:
Efficient optic disc segmentation is an important task in automated retinal screening, and optic disc detection is fundamental for medical reference and for retinal image analysis applications. The most difficult problem of optic disc extraction is locating the region of interest; moreover, it is a time-consuming task. This paper attempts to overcome this barrier by presenting an automated method for optic disc boundary extraction using Fuzzy C-Means clustering combined with thresholding. The discs determined by the new method agree relatively well with those determined by experts. The method has been validated on a data set of 110 colour fundus images from the DRION database and has obtained promising results. The performance of the system is evaluated using the difference in horizontal and vertical diameters between the obtained disc boundary and the ground truth provided by two expert ophthalmologists. For the 25 test images selected from the 110 colour fundus images, the Pearson correlations of the ground-truth diameters with the diameters detected by the new method are 0.946 and 0.958 for the first expert, and 0.94 and 0.974 for the second. The scatter plot shows that the ground-truth and detected diameters have a high positive correlation. This computerized analysis of the optic disc is very useful for the diagnosis of retinal diseases.
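The Fuzzy C-Means-plus-thresholding idea can be sketched on a hypothetical synthetic fundus-like image: cluster the raw intensities with plain FCM, threshold the membership of the bright cluster, and read the horizontal and vertical diameters off the resulting mask. This is a minimal sketch, not the paper's procedure or the DRION data:

```python
import numpy as np

def fuzzy_cmeans_1d(x, c=2, m=2.0, iters=100, seed=0):
    """Plain fuzzy C-means on 1-D intensities: alternate membership and
    centre updates until (approximate) convergence."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)
    for _ in range(iters):
        um = u ** m
        centers = um @ x / um.sum(axis=1)
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        u = d ** (-2.0 / (m - 1.0))
        u /= u.sum(axis=0)
    return centers, u

# hypothetical fundus-like image: dark background plus a bright "disc"
rng = np.random.default_rng(1)
img = 0.2 + 0.05 * rng.random((40, 40))
yy, xx = np.indices(img.shape)
disc = (yy - 20) ** 2 + (xx - 20) ** 2 <= 36
img[disc] = 0.9

centers, u = fuzzy_cmeans_1d(img.ravel(), c=2)
bright = int(np.argmax(centers))
mask = (u[bright] > 0.5).reshape(img.shape)   # threshold the membership map

# horizontal / vertical diameters of the detected disc region
cols = np.where(mask.any(axis=0))[0]
rows = np.where(mask.any(axis=1))[0]
h_diam = int(cols[-1] - cols[0] + 1)
v_diam = int(rows[-1] - rows[0] + 1)
```

The diameters of the mask are exactly the quantities the abstract compares against the expert ground truth.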
Summary:
This thesis explores still image compression. Image compression techniques can be broadly classified into lossless and lossy compression; the most common lossy techniques are based on transform coding, vector quantization and fractals. Transform coding is the simplest of these and generally employs reversible transforms such as the DCT and DWT. The Mapped Real Transform (MRT) is an evolving integer transform based on real additions alone. The present research aims at developing new image compression techniques based on the MRT. Most transform coding techniques employ fixed-block-size image segmentation, usually 8×8. Hence, fixed-block-size transform coding is implemented using the MRT, and its merits and demerits are analyzed for both 8×8 and 4×4 blocks. The N² unique MRT coefficients for each block are computed using templates. Considering the merits and demerits of fixed-block-size transform coding, a hybrid form of these techniques is implemented to improve compression performance; the hybrid coder is found to perform better than the fixed-block-size coders. Thus, if the block size is made adaptive, performance can be improved further. In adaptive-block-size coding the block size may vary from the size of the image down to 2×2, so computing the MRT using templates is impractical due to memory requirements. An adaptive transform coder based on the Unique MRT (UMRT), a compact form of the MRT, is therefore implemented to obtain better performance in terms of PSNR and HVS quality. The suitability of the MRT for vector quantization of images is then investigated, and a UMRT-based Classified Vector Quantization (CVQ) is implemented, in which edges in the images are identified and classified by a UMRT-based criterion. Based on the above experiments, a new technique named "MRT-based Adaptive Transform Coder with Classified Vector Quantization (MATC-CVQ)" is developed. Its performance is evaluated and compared against existing techniques, including standard JPEG and Shapiro's well-known Embedded Zerotree Wavelet (EZW) coder; the proposed technique is found to give better performance for the majority of images.
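Fixed-block-size transform coding as described above can be sketched generically: tile the image, transform each tile, keep only the largest-magnitude coefficients, and invert. Since the MRT is not available in standard libraries, the sketch below uses a DCT as a stand-in transform; the 8×8-vs-4×4 comparison and the coefficient-thresholding logic are the point, not the transform itself:

```python
import numpy as np
from scipy.fft import dctn, idctn

def block_transform_code(img, block=8, keep=10):
    """Split the image into block x block tiles, transform each tile,
    zero all but the `keep` largest-magnitude coefficients, and invert."""
    h, w = img.shape
    out = np.zeros_like(img, dtype=float)
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = img[y:y + block, x:x + block].astype(float)
            c = dctn(tile, norm="ortho")
            thresh = np.sort(np.abs(c).ravel())[-keep]
            c[np.abs(c) < thresh] = 0.0          # crude coefficient selection
            out[y:y + block, x:x + block] = idctn(c, norm="ortho")
    return out

def psnr(a, b, peak=255.0):
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(32, 32)).astype(float)  # hypothetical image
rec8 = block_transform_code(img, block=8, keep=10)
rec4 = block_transform_code(img, block=4, keep=10)
```

Keeping the same number of coefficients per tile, 4×4 blocks retain a larger fraction of each tile's energy, which is one side of the block-size trade-off the thesis analyses.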
Summary:
The management of customer relationships has established itself in classical economics under the term »Customer Relationship Management« (CRM) and has proven a successful approach in recent years. In pursuit of its fundamental objective, binding valuable, i.e. profitable and creditworthy, customers to a company, business intelligence technologies are used to generate customer knowledge from customer-related data. As a technological platform for communication and interaction, business communities grant direct insight into the thoughts and preferences of customers. From business-community-based knowledge of and about customers, individual customer needs and behaviours, and thus also valuable (potential, profile-matching) customers, can be derived, permitting more differentiated and selective treatment of customers. Business communities offer a comprehensive pool of data which, to date, has not been exploited for CRM in the corporate client business or for profile building; the synergy potential of the data source "Business Community" and the technology "Business Intelligence" has so far been neglected. This is where the present work starts. Its goal is the meaningful combination of both approaches into an extended approach for managing corporate client relationships. To this end, a BI-supported CRM concept for the generation, analysis and optimisation of customer knowledge is developed, knowledge obtained specifically through the use of a B2B community and used for profile building. The concept is further optimised by connecting external databases: external (customer) data provided by third-party sources (e.g. information providers and credit information services) flow into the knowledge-generation process for data qualification and quantification. 
The core of this objective lies in the comprehensive generation and continual optimisation of knowledge intended to support the building of long-term, individual and valuable customer relationships.
Summary:
Cell-cell interactions during embryonic development are crucial for the co-ordination of growth, differentiation and maintenance of many different cell types. To achieve this co-ordination, each cell must properly translate signals received from neighbouring cells into spatially and temporally appropriate developmental responses. A surprisingly limited number of signal pathways is responsible for the differentiation of an enormous variety of cell types; as a result, pathways are frequently 'reused' during development. Thus, in mammals the JAK/STAT pathway is required during early embryogenesis, mammary gland formation and hematopoiesis, and finally plays a pivotal role in the immune response. In the canonical pathway, a transmembrane receptor associated with a Janus kinase (JAK), upon stimulation by an extracellular ligand, phosphorylates itself, the receptor and, finally, the signal transducer and activator of transcription (STAT) molecules. Phosphorylated STATs dimerise and translocate to the nucleus, where they activate transcription of target genes. The JAK/STAT pathway has been conserved throughout evolution, and all known components are present in the genome of Drosophila melanogaster. Besides hematopoietic and immune functions, the pathway is also required during development for processes including embryonic segmentation, tracheal morphogenesis and posterior spiracle formation. This study describes Drosophila Ken&Barbie (Ken) as a selective regulator of JAK/STAT signalling. ken mutations, identified in a screen for modulators of an eye overgrowth phenotype caused by over-expression of the pathway ligand unpaired, also interact genetically with the pathway receptor domeless (dome) and the transcription factor stat92E. Over-expression of Ken can phenocopy developmental defects known to be caused by loss of JAK/STAT signalling. These genetic interactions suggest that Ken may function as a negative regulator of the pathway. 
Ken has a C-terminal Zn-finger domain, presumably for DNA binding, and an N-terminal BTB/POZ domain of the kind often found in transcriptional repressors. An EGFP-fused construct expressed in vivo revealed nuclear accumulation of Ken; it is therefore proposed that Ken may act as a suppressor of STAT92E target genes. An in vitro assay, termed SELEX, determined that Ken binds a specific DNA sequence whose core, essential for DNA recognition, overlaps that of the STAT92E site. This observation suggests that not all STAT92E sites also allow Ken binding. Strikingly, when the effects of ectopic Ken on the expression of putative JAK/STAT pathway target genes were examined, only a subset of the genes tested, namely vvl, trh and kni, were down-regulated by Ken, whereas others, such as eve and fj, appeared unresponsive. Further analysis of vvl, one of the genes susceptible to ectopic Ken, was undertaken. In the developing hindgut, expression of vvl is JAK/STAT pathway dependent, but it remains repressed in the posterior spiracles despite the stimulation of STAT92E by Upd in their primordia. Importantly, ken is also expressed in the developing posterior spiracles, and up-regulation of vvl is observed in these tissues in ken mutant embryos. These findings imply that while ectopic Ken is sufficient to repress the expression of vvl in the hindgut, endogenous Ken is also necessary to prevent its activation in the posterior spiracles. It is therefore conceivable that ectopic vvl expression in the posterior spiracles of ken mutants results from de-repression of endogenous STAT92E activity. A further implication of these observations is that a fine balance must exist between STAT92E and Ken activities. 
Apparently, the endogenous level of Ken is sufficient to repress vvl but not other, as yet unidentified, JAK/STAT pathway targets whose presumed activation by STAT92E is required for posterior spiracle development, since embryos mutant for dome, the pathway receptor, show severe spiracle defects. These defects are also observed in embryos mis-expressing Ken. Though it is possible that the posterior spiracle phenotype caused by higher levels of Ken results from a JAK/STAT-pathway-independent activity, it seems more likely that Ken acts in a dosage-dependent manner, and that extra Ken is able to further antagonise JAK/STAT pathway target genes. While the STAT92E binding sites required for target gene expression have been poorly characterised, the available genome data allow candidate STAT92E sites in target gene promoters to be predicted. When a 6 kb region containing the putative regulatory domains flanking the vvl locus is examined, only a single potential STAT92E binding site, located 825 bp upstream of the translational start, can be detected. Strikingly, this site also includes a perfect Ken binding sequence. This in silico observation, though consistent with both the Ken DNA-binding assay in vitro and the regulation of STAT92E target genes in vivo, requires further analysis. The JAK/STAT pathway is implicated in a variety of processes during embryonic and larval development as well as in the imago; in each case, stimulation of the same transcription factor results in different developmental outcomes. While many potential mechanisms have been proposed and demonstrated to explain such pleiotropy, the present study indicates that Ken may represent another mechanism by which signal transduction pathways are controlled. 
Ken selectively down-regulates a subset of potential target genes and so modifies the transcriptional profile generated by activated STAT92E, a mechanism that may be partially responsible for differences in the morphogenetic processes elicited by JAK/STAT signalling during development.
Summary:
The technology of service-oriented architectures (SOA) inspires great visions in industry and research alike. It has proven to be the current solution of choice for environments in which IT requirements change rapidly. Today's IT systems must permit management tasks such as software installation, adaptation or replacement without significantly disturbing ongoing operation. Service-oriented architectures, in which software components are available in the form of services, offer the necessary flexibility. Through its interface, a service gives local as well as remote applications access to its functionality. In the following we consider only service-oriented architectures in which services can be dynamically discovered, bound, composed, negotiated and adapted at runtime. An application can work with different services, for instance when services fail or when a new service better meets the application's requirements. One of our basic premises is therefore that both the supply of services and the demand side are variable. Service-oriented architectures carry particular weight in the implementation of business processes. Under the Enterprise Integration Architecture paradigm, individual work steps are implemented as services and a business process is executed as a workflow of services; such a service composition is also called an orchestration. For B2B integration (business-to-business) in particular, services are the proven means of supporting communication across company boundaries. Services are here typically realised as Web Services, which are orchestrated by means of BPEL4WS. XML-based messaging and the HTTP protocol ensure compatibility between heterogeneous systems and transparency of the message traffic. 
Providers expect considerable benefit from making their services public: on the one hand, more frequent integration of their services into software processes; on the other, the development of new software on the basis of their services. In the future, hundreds of such services will be available, and it will be hard for developers to find suitable offers. The ADDO project has achieved important results in this setting. In the course of the project it was shown that the use of semantic specifications makes it possible to inspect services automatically with respect to both their functional and their non-functional properties, in particular quality of service, and to bind them to service aggregates [15]. To this end, ontology schemata [10, 16], matching algorithms [16, 9] and tools were developed and implemented as a framework [16]. The quality-of-service matching algorithm developed in this context handles the automatic negotiation of contracts for service use, for instance to integrate fee-based services. ADDO provides an approach for creating templates for service aggregates in BPEL4WS that are managed automatically at runtime. The approach proved its effectiveness at the international Web Service Challenge 2006 in San Francisco: the semantic service composition algorithm developed for ADDO took first place. The algorithm makes it possible to select suitable services from a very large set of offered services, to combine them into service aggregates, and thereby to deliver the functionality of a given requested service. Further results of the ADDO project have been published at international workshops and conferences [12, 11].
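The core of such semantic composition, selecting services whose (semantic) inputs are already satisfied and chaining them until the requested functionality is covered, can be sketched with a simple forward-chaining loop. The registry below is invented for illustration and is not the ADDO algorithm or its ontology machinery:

```python
from typing import Dict, List, Set, Tuple

def compose(services: Dict[str, Tuple[Set[str], Set[str]]],
            provided: Set[str], goal: Set[str]) -> List[str]:
    """Greedy forward-chaining composition: repeatedly bind any service
    whose input concepts are already satisfied, until the goal is covered."""
    known, plan = set(provided), []
    changed = True
    while changed and not goal <= known:
        changed = False
        for name in sorted(services):          # deterministic order
            inputs, outputs = services[name]
            if name not in plan and inputs <= known and not outputs <= known:
                plan.append(name)
                known |= outputs
                changed = True
    if not goal <= known:
        raise ValueError("no composition found")
    return plan

# hypothetical service registry: names and I/O concepts are invented
registry = {
    "geocode":   ({"city"}, {"lat", "lon"}),
    "weather":   ({"lat", "lon"}, {"forecast"}),
    "translate": ({"text"}, {"text_en"}),
}
plan = compose(registry, provided={"city"}, goal={"forecast"})
```

A real matcher would compare ontology concepts and quality-of-service constraints rather than plain strings, but the aggregate-building control flow is the same.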
Summary:
Productivity and forage quality of legume-grass swards are important factors for successful arable farming in both organic and conventional farming systems. For these objectives the botanical composition of the swards is of particular importance, especially the content of legumes, due to their ability to fix airborne nitrogen. As this content can vary considerably within a field, a non-destructive detection method usable while doing other tasks would facilitate more targeted sward management and could predict the nitrogen supply of the soil for the subsequent crop. This study was undertaken to explore the potential of digital image analysis (DIA) for a non-destructive prediction of the legume dry matter (DM) contribution of legume-grass mixtures. For this purpose an experiment was conducted in a greenhouse comprising 64 experimental swards: pure swards of red clover (Trifolium pratense L.), white clover (Trifolium repens L.) and lucerne (Medicago sativa L.), as well as binary mixtures of each legume with perennial ryegrass (Lolium perenne L.). Growth stages ranged from tillering to heading, and the proportion of legumes from 0 to 80 %. Based on digital sward images, three steps were taken to estimate the legume contribution (% of DM): i) the development of a DIA procedure to estimate legume coverage (% of area); ii) the description of the relationship between legume coverage (% of area), derived from digital analysis of legume coverage relative to the green area in a digital image, and legume contribution (% of DM); and iii) the estimation of the legume DM contribution from the findings of i) and ii). i) To evaluate the most suitable approach for estimating legume coverage by means of DIA, different tools were tested. Morphological operators such as erode and dilate support the differentiation of objects of different shape by shrinking and dilating objects (Soille, 1999). 
When applied to digital images of legume-grass mixtures, thin grass leaves were removed whereas rounder clover leaves were retained; legume leaves were then identified by threshold segmentation. Segmentation of greyscale images turned out not to be applicable, since the segmentation between legumes and bare soil failed. The advanced procedure, comprising morphological operators and HSL colour information, could determine bare-soil areas in young and open swards very accurately, and legume-specific HSL thresholds allowed precise estimates of legume coverage across a wide range, from 11.8 to 72.4 %. Based on this legume-specific DIA procedure, estimated legume coverage showed good correlations with the measured values across the whole range of sward ages (R2 0.96, SE 4.7 %). A wide range of form parameters (i.e. size, breadth, rectangularity and circularity of areas) was tested across all sward types, but none improved the prediction accuracy of legume coverage significantly. ii) Using measured reference data of legume coverage and contribution, a first approach found a common relationship based on all three legumes and sward ages of 35, 49 and 63 days, with R2 0.90. This relationship was improved by a legume-specific approach using only 49- and 63-day-old swards (R2 0.94, 0.96 and 0.97 for red clover, white clover and lucerne, respectively), since differing structural attributes of the legume species influence the relationship between the two parameters. In a second approach, biomass was included in the model to allow for the different structures of swards of different ages. The resulting model provides a close look at the relationship between legume coverage in binary legume-ryegrass communities and legume contribution: at the same level of legume coverage, legume contribution decreased with increasing total biomass. 
This phenomenon may be caused by more non-leguminous biomass being covered by legume leaves at high levels of total biomass. Additionally, values of legume contribution and coverage were transformed to the logit scale in order to avoid problems with heteroscedasticity and negative predictions. The resulting relationships between measured and calculated legume contribution indicated high model accuracy for all legume species (R2 0.93, 0.97, 0.98 with SE 4.81, 3.22, 3.07 % of DM for red clover, white clover and lucerne swards, respectively). Validation of the model using digital images collected over field-grown swards, with biomass ranges within the scope of the model, shows that the model is able to predict legume contribution for most common legume-grass swards (Frame, 1992; Ledgard and Steele, 1992; Loges, 1998). iii) An advanced procedure for the determination of legume DM contribution by DIA is suggested, which includes morphological operators and HSL colour information in the image analysis and applies an advanced function to predict legume DM contribution from legume coverage while considering total sward biomass. Low residuals between measured and calculated values of legume DM contribution were found for the separate legume species (R2 0.90, 0.94, 0.93 with SE 5.89, 4.31, 5.52 % of DM for red clover, white clover and lucerne swards, respectively). The introduced DIA procedure provides a rapid and precise estimation of legume DM contribution for different legume species across a wide range of sward ages. Further research is needed to adapt the procedure to field scale, dealing with differing light effects and potentially taller swards. The integration of total biomass into the model does not necessarily reduce its applicability in practice, as a combined estimation of total biomass by field spectroscopy (Biewer et al. 2009) and of legume coverage by DIA may allow an accurate prediction of the legume contribution in legume-grass mixtures.
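Steps i) and ii) above can be sketched as follows: a morphological opening removes thin "grass" structures from a hypothetical binary sward mask while retaining round "clover" leaves, coverage is measured, and a logit-scale linear model (fitted here to invented calibration pairs, not the study's data) maps coverage to DM contribution:

```python
import numpy as np
from scipy import ndimage

# hypothetical binary sward mask: one round "clover" leaf, one thin "grass" blade
mask = np.zeros((64, 64), dtype=bool)
yy, xx = np.indices(mask.shape)
mask |= (yy - 20) ** 2 + (xx - 20) ** 2 <= 64   # round leaf, radius 8
mask[40, 5:60] = True                            # 1-pixel-wide grass blade

# morphological opening (erode then dilate) removes the thin blade
opened = ndimage.binary_opening(mask, structure=np.ones((5, 5)))
coverage = float(opened.mean())                  # legume coverage (fraction of area)

# logit-scale linear mapping from coverage to DM contribution
def logit(p):
    return np.log(p / (1.0 - p))

cal_cov = np.array([0.15, 0.30, 0.50, 0.70])     # invented calibration pairs
cal_dm = np.array([0.10, 0.22, 0.40, 0.62])
slope, intercept = np.polyfit(logit(cal_cov), logit(cal_dm), 1)
pred_dm = float(1.0 / (1.0 + np.exp(-(slope * logit(coverage) + intercept))))
```

The logit transform keeps predictions inside (0, 1), which is the study's stated reason for using it.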
Summary:
The automotive industry responds with modularisation strategies to growing product complexity, driven by increasing customer demands for individualisation and by model policies with frequent new vehicle launches. Manufacturers shift material-supply complexity by outsourcing to the next supplier tier, the first-tier suppliers, who since the early 1990s have increasingly been integrated into supplier parks in the immediate vicinity of the plant. Typical features of a classical supplier park are: provision of hall infrastructure together with infrastructure services; delivery of parts just-in-sequence (JIS, i.e. synchronised in exact assembly order); local value creation (pre-assembly, sequencing) by the supplier; contractual commitment of the first-tier suppliers for the duration of a product life cycle; and the involvement of a logistics service provider. In some cases public-sector funding projects are initiated for financing. A scientific treatment of the topic "supplier park" had so far been lacking. This thesis examines the supplier parks that have emerged in Europe in order to document the advantages and disadvantages of this logistics concept and to identify development trends. From these findings, optimisation approaches are derived and concrete development paths are described for improving the opportunity-risk position of the main actors: vehicle manufacturers, suppliers and logistics service providers. The thesis is structured in four main parts: a differentiated description of the initial situation and of development trends in the automotive industry, the procedure model, the documentation of the analysis results, and the evaluation of supplier-park models. Within the documentation of the analysis results, four supplier-park models are vividly presented in detailed case studies. 
The analysis results were obtained through a survey of the main actors using structured questionnaires; experts were additionally interviewed to capture industry trends and to compare the park models. Network analysis was used to segment the supplier-park landscape, and the relative assessment of the benefit positions is based on utility value analysis. The results of the thesis are: a comprehensive analysis of the supplier-park landscape in Europe; a segmentation of the parks into supplier-park models; optimisation approaches for improving the win-win situation of the main actors involved; a relative benefit assessment of the supplier-park models; and development paths for classical supplier parks.
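The utility value analysis used for the relative benefit assessment can be sketched as a weighted scoring scheme. The criteria, weights and scores below are invented for illustration; the study's actual criteria and expert ratings are not reproduced here:

```python
# hypothetical criteria weights (summing to 1) and 1-10 scores per park model
weights = {"flexibility": 0.5, "cost": 0.3, "risk_sharing": 0.2}
scores = {
    "park_model_A": {"flexibility": 8, "cost": 6, "risk_sharing": 7},
    "park_model_B": {"flexibility": 6, "cost": 9, "risk_sharing": 8},
    "park_model_C": {"flexibility": 7, "cost": 7, "risk_sharing": 6},
}

# utility value = weighted sum of criterion scores; rank models by it
utility = {model: sum(weights[c] * s[c] for c in weights)
           for model, s in scores.items()}
ranking = sorted(utility, key=utility.get, reverse=True)
```

The ranking produced this way is the "relative benefit assessment" in miniature: changing the weights shifts which park model comes out ahead, which is exactly why the weighting step is made explicit in the method.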
Summary:
E-Business, verstanden als ganzheitliche Strategie zur Reorganisation von Geschäftsprozessen, Strukturen und Beziehungen in Unternehmen, bietet für die Arbeitsgestaltung in einer digital vernetzten Welt Chancen und Risiken in Hinblick auf die Humankriterien. Empirische Untersuchungen in 14 Unternehmen zeigen „good practice“-Ansätze im B2B-Feld (Business-to-Business). Untersucht wurden die Tätigkeiten der elektronisch vernetzten Auftragsbearbeitung, des Web-, Content-Managements, der digitalen Druckvorlagenherstellung sowie der CAD- Bauplanzeichnung. Die beobachteten Arbeitsplätze zeigen, dass Arbeitsinhalte eher ganzheitlich und komplex gestaltet sind. Planende, ausführende, kontrollierende und organisierende Anteile weisen auf eine vielfältige Aufgabengestaltung hin, die hohe Anforderungen beinhaltet. Während alle beobachteten Tätigkeiten mit Aufnahme-, Erarbeitungs-, Verarbeitungs-, Übertragungs- und Weitergabeprozessen von Informationen zu tun haben, gibt es Differenzen in Bezug auf den Arbeitsumfang, den Zeitdruck, Fristsetzungen, erwartete Arbeitsleistungen sowie die Planbarkeit der Aufgaben. Die vorgefundenen Aufgabentypen (wenig bis sehr anforderungsreich im Sinne von Denk- und Planungsanforderungen) sind gekennzeichnet durch eine unterschiedlich ausgeprägte Aufgabenkomplexität. Interessant ist, dass, je anforderungsreicher die Aufgabengestaltung, je höher die Aufgabenkomplexität, je größer die Wissensintensität und je niedriger die Planbarkeit ist, desto größer sind die Freiräume in der Aufgabenausführung. Das heißt wiederum, dass bei zunehmenden E-Business-Anteilen mehr Gestaltungsspielräume zur Verfügung stehen. Die bestehenden Chancen auf eine humane Aufgabengestaltung sind umso größer, je höher die E-Business-Anteile in der Arbeit sind. Diese Wirkung findet sich auch bei einem Vergleich der Aufgabenbestandteile wieder. Die negativen Seiten des E-Business zeigen sich in den vorgefundenen Belastungen, die auf die Beschäftigten einwirken. 
Diskutiert wird die Verschiebung von körperlichen hin zu psychischen und vorrangig informatorischen Belastungen. Letztere stellen ein neues Belastungsfeld dar. Ressourcen, auf welche die Mitarbeiter zurückgreifen können, sind an allen Arbeitsplätzen vorhanden, allerdings unterschiedlich stark ausgeprägt. Personale, organisationale, soziale, aufgabenbezogene und informatorische Ressourcen, die den Beschäftigten zur Verfügung stehen, werden angesprochen. In Bezug auf die Organisationsgestaltung sind positive Ansätze in den untersuchten E-Business-Unternehmen zu beobachten. Der Großteil der untersuchten Betriebe hat neue Arbeitsorganisationskonzepte realisiert, wie die vorgefundenen kooperativen Organisationselemente zeigen. Die kooperativen Organisationsformen gehen allerdings nicht mit einer belastungsärmeren Gestaltung einher. Das vorgefundene breite Spektrum, von hierarchisch organisierten Strukturen bis hin zu prozess- und mitarbeiterorientierten Organisationsstrukturen, zeigt, dass Organisationsmodelle im E-Business gestaltbar sind. Neuen Anforderungen kann insofern gestaltend begegnet und somit die Gesundheit und das Wohlbefinden der Mitarbeiter positiv beeinflusst werden. Insgesamt betrachtet, zeigt E-Business ein ambivalentes Gesicht, das auf der Basis des MTO-Modells (Mensch-Technik-Organisation) von Uhlich (1994) diskutiert wird, indem vernetzte Arbeitsprozesse auf personeller, technischer sowie organisationaler Ebene betrachtet werden. E-business, seen as more than only the transformation of usual business processes into digital ones, furthermore as an instrument of reorganisation of processes and organisation structures within companies, offers chances for a human oriented work organisation. Empirical data of 14 case studies provide good practice approaches in the field of B2B (Business-to-Business). The observed work contents show, that tasks (e.g. 
order processing, web-, contentmanagement, first print manufacturing and architectural drawing) are well arranged. Executive, organising, controlling and coordinating parts constitute a diversified work content, which can be organised with high demands. Interesting is the result, that the more e-business-parts are within the work contents, on the one hand the higher are the demands of the type of work and on the other hand the larger is the influence on workmanship. The observed enterprises have realised new elements of work organisation, e.g. flexible working time, cooperative leadership or team work. The direct participation of the employees can be strengthened, in particular within the transformation process. Those companies in which the employees were early and well informed about the changes coming up with e-business work, the acceptance for new technique and new processes is higher than in companies which did not involve the person concerned. Structured in an ergonomic way, there were found bad patterns of behaviour concerning ergonomic aspects, because of missing knowledge regarding work-related ergonomic expertise by the employees. E-business indicates new aspects concerning requirements – new in the field of informational demands, as a result of poorly conceived technical balance in the researched SME. Broken systems cause interruptions, which increase the pressure of time all the more. Because of the inadequate usability of software-systems there appear in addition to the informational strains also elements of psychological stress. All in all, work contents and work conditions can be shaped and as a result the health and well-being of e-business-employees can be influenced: Tasks can be structured and organised in a healthfulness way, physiological strain and psychological stress are capable of being influenced, resources are existent and developable, a human work design within e-business structures is possible. 
The ambivalent face of e-business work is discussed on the basis of the MTO (Mensch-Technik-Organisation) model (Ulich 1994). In this way, new findings are obtained concerning the personal, the technical and the organisational side of e-business work.
Resumo:
This thesis takes an interdisciplinary approach to the study of color vision, focussing on the phenomenon of color constancy formulated as a computational problem. The primary contributions of the thesis are (1) the demonstration of a formal framework for lightness algorithms; (2) the derivation of a new lightness algorithm based on regularization theory; (3) the synthesis of an adaptive lightness algorithm using "learning" techniques; (4) the development of an image segmentation algorithm that uses luminance and color information to mark material boundaries; and (5) an experimental investigation into the cues that human observers use to judge the color of the illuminant. Other computational approaches to color are reviewed and some of their links to psychophysics and physiology are explored.
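The family of lightness algorithms the thesis formalizes shares a common core: attribute slow spatial variation in image intensity to the illuminant and sharp changes to surface reflectance. As an illustrative sketch only (a one-dimensional, Land/Horn-style thresholded-gradient scheme, not the regularization-based algorithm derived in the thesis), the idea can be coded as:

```python
import numpy as np

def lightness_1d(luminance, grad_threshold=0.05):
    """Recover lightness (log-reflectance up to an additive constant)
    from a 1-D luminance signal: discard small log-luminance gradients
    (attributed to smoothly varying illumination) and reintegrate the
    remaining sharp transitions (attributed to reflectance edges)."""
    log_l = np.log(np.asarray(luminance, dtype=float))
    grads = np.diff(log_l)
    # keep only sharp transitions; zero out the slow illumination drift
    kept = np.where(np.abs(grads) > grad_threshold, grads, 0.0)
    lightness = np.concatenate(([0.0], np.cumsum(kept)))
    return lightness - lightness.mean()   # normalize to zero mean

# piecewise-constant reflectance viewed under a slow illumination ramp
reflectance = np.repeat([0.2, 0.8, 0.4], 50)
illumination = np.linspace(0.5, 1.5, 150)
est = lightness_1d(reflectance * illumination)
```

On this toy signal the estimate is flat within each reflectance patch and steps by approximately the log-reflectance ratio at each material boundary, even though the raw luminance drifts everywhere.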
Resumo:
A key problem in object recognition is selection, namely, the problem of identifying regions in an image within which to start the recognition process, ideally by isolating regions that are likely to come from a single object. Such a selection mechanism has been found to be crucial in reducing the combinatorial search involved in the matching stage of object recognition. Even though selection is of help in recognition, it has largely remained unsolved because of the difficulty of isolating regions belonging to objects under complex imaging conditions involving occlusions, changing illumination, and varying object appearance. This thesis presents a novel approach to the selection problem by proposing a computational model of visual attentional selection as a paradigm for selection in recognition. In particular, it proposes two modes of attentional selection, namely, the attracted and pay-attention modes, as being appropriate for data- and model-driven selection in recognition. An implementation of this model has led to new ways of extracting color, texture and line-group information in images, and to their subsequent use in isolating areas of the scene likely to contain the model object. Among the specific results in this thesis are: a method of specifying color by perceptual color categories for fast color region segmentation and color-based localization of objects, and a result showing that the recognition of texture patterns on model objects is possible under changes in orientation and occlusions without detailed segmentation. The thesis also presents an evaluation of the proposed model by integrating it with a 3D-from-2D object recognition system and recording the improvement in performance. These results indicate that attentional selection can significantly overcome the computational bottleneck in object recognition, both through a reduction in the number of features and through a reduction in the number of matches during recognition, using the information derived during selection. 
Finally, these studies have revealed a surprising use of selection, namely, in the partial solution of the pose of a 3D object.
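The color-category idea mentioned above, quantizing pixels into a handful of coarse, perceptually named colors before segmenting, can be sketched as follows. The prototype palette, the Euclidean distance measure, and the 4-connected region growing here are illustrative assumptions, not the thesis' actual category set or implementation:

```python
import numpy as np

# Coarse color prototypes (a hypothetical palette): name -> RGB in [0, 1].
PROTOTYPES = {
    "red":    (1.0, 0.0, 0.0),
    "green":  (0.0, 1.0, 0.0),
    "blue":   (0.0, 0.0, 1.0),
    "yellow": (1.0, 1.0, 0.0),
    "black":  (0.0, 0.0, 0.0),
    "white":  (1.0, 1.0, 1.0),
}

def categorize(image):
    """Label each pixel with the index of its nearest color prototype.
    image: (H, W, 3) float array in [0, 1]."""
    protos = np.array(list(PROTOTYPES.values()))                 # (K, 3)
    dists = np.linalg.norm(image[:, :, None, :] - protos, axis=-1)
    return dists.argmin(axis=-1)                                 # (H, W)

def regions(labels, target):
    """4-connected components of pixels whose category == target."""
    mask = labels == target
    comp = np.full(labels.shape, -1)
    next_id = 0
    for seed in zip(*np.nonzero(mask)):
        if comp[seed] != -1:
            continue
        stack = [seed]
        comp[seed] = next_id
        while stack:                     # iterative flood fill
            y, x = stack.pop()
            for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and comp[ny, nx] == -1):
                    comp[ny, nx] = next_id
                    stack.append((ny, nx))
        next_id += 1
    return comp, next_id

# toy scene: two reddish patches on a white background
img = np.ones((8, 8, 3))
img[1:3, 1:3] = (0.9, 0.1, 0.1)
img[5:7, 4:7] = (0.8, 0.05, 0.1)
labels = categorize(img)
red = list(PROTOTYPES).index("red")
comp, n = regions(labels, red)
```

Because pixels are compared against a few named prototypes rather than clustered in full color space, slightly different reds map to the same category, which is what makes this kind of categorical segmentation fast and tolerant of appearance variation.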
Resumo:
There has been recent interest in using temporal difference (TD) learning methods to attack problems of prediction and control. While these algorithms have been brought to bear on many problems, they remain poorly understood. The purpose of this thesis is to explore these algorithms further, presenting a framework for viewing them, raising a number of practical issues, and examining those issues in the context of several case studies. These include applying the TD(lambda) algorithm to: 1) learning to play tic-tac-toe from the outcome of self-play and of play against a perfectly-playing opponent, and 2) learning simple one-dimensional segmentation tasks.
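As background for these case studies, TD(lambda) itself is compact. The sketch below applies it to the classic random-walk prediction task (a standard textbook illustration, not one of the thesis' case studies): the value estimates converge toward each state's true probability of exiting on the right.

```python
import random

def td_lambda_random_walk(n_states=5, episodes=10000, alpha=0.02,
                          lam=0.8, seed=0):
    """TD(lambda) prediction on a random walk: states 0..n-1, start in
    the middle, step left/right with equal probability; reward 1 for
    exiting right, 0 for exiting left.  With gamma = 1, the true value
    of state i is (i + 1) / (n + 1).  Illustrative sketch only."""
    rng = random.Random(seed)
    v = [0.5] * n_states                  # value estimates
    for _ in range(episodes):
        e = [0.0] * n_states              # eligibility traces
        s = n_states // 2
        while True:
            e[s] += 1.0                   # accumulating trace
            s2 = s + (1 if rng.random() < 0.5 else -1)
            if s2 < 0:
                r, v2, done = 0.0, 0.0, True
            elif s2 >= n_states:
                r, v2, done = 1.0, 0.0, True
            else:
                r, v2, done = 0.0, v[s2], False
            delta = r + v2 - v[s]         # TD error (gamma = 1)
            for i in range(n_states):
                v[i] += alpha * delta * e[i]
                e[i] *= lam               # decay traces by gamma*lambda
            if done:
                break
            s = s2
    return v

values = td_lambda_random_walk()
```

Note how the eligibility traces spread each TD error backward over recently visited states; with lam=0 this reduces to one-step TD(0), and with lam=1 it behaves like a Monte Carlo update.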
Resumo:
We present a statistical image-based shape + structure model for Bayesian visual hull reconstruction and 3D structure inference. The 3D shape of a class of objects is represented by sets of contours from silhouette views simultaneously observed from multiple calibrated cameras. Bayesian reconstructions of new shapes are then estimated using a prior density constructed with a mixture model and probabilistic principal components analysis. We show how the use of a class-specific prior in a visual hull reconstruction can reduce the effect of segmentation errors from the silhouette extraction process. The proposed method is applied to a data set of pedestrian images, and improvements in the approximate 3D models under various noise conditions are shown. We further augment the shape model to incorporate structural features of interest; unknown structural parameters for a novel set of contours are then inferred via the Bayesian reconstruction process. Model matching and parameter inference are done entirely in the image domain and require no explicit 3D construction. Our shape model enables accurate estimation of structure despite segmentation errors or missing views in the input silhouettes, and works even with only a single input view. Using a data set of thousands of pedestrian images generated from a synthetic model, we can accurately infer the 3D locations of 19 joints on the body based on observed silhouette contours from real images.
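The benefit of a class-specific shape prior can be illustrated with a stripped-down linear version of the idea: learn a low-dimensional subspace from clean training contours, then project a noisy observation onto it so that components the shape class cannot explain (such as segmentation noise) are discarded. The radial-contour parameterization and plain PCA below are illustrative assumptions; the paper itself uses a mixture model with probabilistic principal components analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for silhouette-contour training data: each
# "shape" is a vector of radial distances sampled around the contour,
# drawn from a 2-parameter class model (scale and elongation).
def make_shape(scale, elong, n=64):
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    return scale * (1.0 + elong * np.cos(2.0 * theta))

train = np.array([make_shape(1.0 + 0.2 * rng.standard_normal(),
                             0.3 + 0.05 * rng.standard_normal())
                  for _ in range(200)])

# Class-specific linear prior: mean + top-k principal subspace.
mean = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
basis = vt[:2]                     # k = 2 matches the generative model

def reconstruct(noisy):
    """Project a noisy contour onto the learned subspace: components
    the shape class cannot explain (segmentation noise) are dropped."""
    coeffs = basis @ (noisy - mean)
    return mean + coeffs @ basis

clean = make_shape(1.1, 0.28)
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
recon = reconstruct(noisy)
```

Because the 64-dimensional noise has only a tiny component inside the 2-dimensional class subspace, the projected reconstruction lies much closer to the clean contour than the noisy observation does, which is the same mechanism by which the paper's prior suppresses silhouette-extraction errors.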
Resumo:
This thesis presents a perceptual system for a humanoid robot that integrates abilities such as object localization and recognition with the deeper developmental machinery required to forge those competences out of raw physical experiences. It shows that a robotic platform can build up and maintain a system for object localization, segmentation, and recognition, starting from very little. What the robot starts with is a direct solution to achieving figure/ground separation: it simply 'pokes around' in a region of visual ambiguity and watches what happens. If the arm passes through an area, that area is recognized as free space. If the arm collides with an object, causing it to move, the robot can use that motion to segment the object from the background. Once the robot can acquire reliable segmented views of objects, it learns from them, and from then on recognizes and segments those objects without further contact. Both low-level and high-level visual features can also be learned in this way, and examples are presented for both: orientation detection and affordance recognition, respectively. The motivation for this work is simple. Training on large corpora of annotated real-world data has proven crucial for creating robust solutions to perceptual problems such as speech recognition and face detection. But the powerful tools used during training of such systems are typically stripped away at deployment. Ideally they should remain, particularly for unstable tasks such as object detection, where the set of objects needed in a task tomorrow might be different from the set of objects needed today. The key limiting factor is access to training data, but as this thesis shows, that need not be a problem on a robotic platform that can actively probe its environment, and carry out experiments to resolve ambiguity. 
This work is an instance of a general approach to learning a new perceptual judgment: find special situations in which the perceptual judgment is easy and study these situations to find correlated features that can be observed more generally.
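The figure/ground trick at the heart of the poking strategy, segmenting the object as whatever moved when the arm disturbed it, can be reduced to a minimal sketch. Single-frame differencing below is an illustrative stand-in for the richer motion evidence the thesis actually uses; note that the changed-pixel mask covers both the vacated and the newly occupied area:

```python
import numpy as np

def segment_by_motion(before, after, threshold=0.1):
    """Mark as foreground every pixel whose intensity changed between
    frames captured before and after the robot's poke.  The mask
    includes both where the object was and where it moved to."""
    return np.abs(after.astype(float) - before.astype(float)) > threshold

# toy example: a 3x3 'object' shifts one pixel right after a poke
before = np.zeros((10, 10))
after = np.zeros((10, 10))
before[4:7, 3:6] = 1.0
after[4:7, 4:7] = 1.0
mask = segment_by_motion(before, after)
```

In this toy frame pair only the leading and trailing columns of the object change, so the mask isolates exactly those six pixels; a real system would accumulate such evidence over many frames to recover the full object region.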