114 results for Workstation


Relevance:

10.00%

Publisher:

Abstract:

The objectives of this research were to survey the benefits gained from applying ergonomic improvements to workstations and to the planned supply of parts on an automotive assembly line, and to identify and verify the extent to which competitive advantages can be generated in reducing vehicle assembly time through technological investments in ergonomics in the manufacturing area. The Methods-Time Measurement (MTM) methodology was chosen to measure the differences in process times and to collect and identify data. Two assembly lines were observed: the first, termed innovative, built three years ago with investments in ergonomic solutions both in parts supply and in the process itself; the other, traditional, built 20 years ago with little investment in this area. With the necessary data on the systems under study, and using the MTM technique, the research evaluates and proposes ways of measuring the gains obtained by reducing activities that add no value to the product, with the aim of justifying investments in ergonomically standardized workstations, manipulators, more modern facilities and even a more robust production process planning team. This work also analyses the influence of ergonomics on final product cost, quality, rework, medical leave and absenteeism, among other factors.
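
MTM expresses each elementary motion in time measurement units (TMU), where 1 TMU = 1/100,000 hour = 0.036 s, so process alternatives can be compared by summing the TMU of their motions. A minimal Python sketch of the kind of before/after comparison the study describes; the motion lists and TMU values are hypothetical illustrations, not study data:

    # Hypothetical sketch of an MTM-style comparison between a traditional
    # and an ergonomically improved workstation. Motions and TMU values
    # are illustrative placeholders, not measured data from the study.
    TMU_TO_SECONDS = 0.036  # 1 TMU = 1/100,000 hour = 0.036 s

    # (motion element, TMU, adds value to the product?)
    traditional = [
        ("walk to shelf for part", 150, False),
        ("bend and pick up part", 60, False),
        ("position part on vehicle", 50, True),
        ("fasten part", 280, True),
    ]
    innovative = [
        ("pick kitted part at point of use", 25, False),
        ("position part on vehicle", 50, True),
        ("fasten part", 280, True),
    ]

    def cycle_summary(motions):
        total = sum(tmu for _, tmu, _ in motions) * TMU_TO_SECONDS
        waste = sum(tmu for _, tmu, adds in motions if not adds) * TMU_TO_SECONDS
        return total, waste

    for name, motions in (("traditional", traditional), ("innovative", innovative)):
        total_s, waste_s = cycle_summary(motions)
        print(f"{name}: cycle {total_s:.1f} s, non-value-added {waste_s:.1f} s")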

Relevance:

10.00%

Publisher:

Abstract:

This study aims to show that introducing a tropical crop such as mango (Mangifera indica) into the Mediterranean area of the Guadalhorce valley (Málaga, Spain) is not advisable. The crop is expanding because of its economic profitability, but the potential climatic risks to its cultivation must be considered (given the large investment it requires). To support this claim, an agroclimatic study is presented, carried out on the experimental plots of the IFAPA (Instituto de Investigación y Formación Agraria y Pesquera de Andalucía) estate in Churriana (Málaga). The results are obtained from the data of a weather station and from a territorial model built with GIS spatial analysis tools. Thermal variables and winds are considered as the main determinants of the onset of apical necrosis, a fatal pathology for mango.

Relevance:

10.00%

Publisher:

Abstract:

There has been a significant increase in the incidence of musculoskeletal disorders (MSD), and the associated costs are predicted to rise as computer use becomes more widespread at home, school and work. Risk factors have been identified in the adult population, but little is known about the risk factors for children and youth. Research has demonstrated that they are not immune to this risk and that they self-report the same pain as adults. The purpose of the study was to examine children's postures while working at computer workstations under two conditions: an ergonomically adjusted children's workstation and an average adult workstation. A Polhemus Fastrak™ system was used to record the children's postures, and joint and segment angles were quantified. Results of the study showed that children reported more discomfort and effort at the adult workstation. Segment and joint angles showed significant differences through the upper limb at the adult workstation. Of particular significance was the strategy of shoulder abduction and flexion that the children used in order to place their hand on the mouse. Ulnar deviation was also greater at the adult workstation, as was neck extension. All of these factors have been identified in the literature as increasing the risk of injury. A comparison of the children's posture while playing at the children's workstation versus the adult workstation showed that the postural angles assumed by the children at the adult workstation exceeded the Occupational Safety and Health Administration (OSHA) recommendations. Further investigation is needed to increase our knowledge of MSD in children, as their potential for long-term damage has yet to be determined.
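
A minimal Python sketch of the kind of angle-versus-guideline check the OSHA comparison implies; the joint names, measured angles and limits below are hypothetical placeholders, not the study's data or OSHA's actual figures:

    # Hypothetical sketch: flag joint angles that exceed a guideline range.
    # All names, measured values and limits are illustrative only.
    guideline_limits_deg = {      # hypothetical upper limits
        "shoulder_abduction": 20.0,
        "ulnar_deviation": 10.0,
        "neck_extension": 15.0,
    }
    measured_adult_ws = {         # hypothetical measurements, adult workstation
        "shoulder_abduction": 34.0,
        "ulnar_deviation": 18.0,
        "neck_extension": 22.0,
    }
    for joint, angle in measured_adult_ws.items():
        limit = guideline_limits_deg[joint]
        status = "EXCEEDS" if angle > limit else "within"
        print(f"{joint}: {angle:.0f} deg ({status} limit of {limit:.0f} deg)")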

Relevance:

10.00%

Publisher:

Abstract:

In bioinformatics, it is common to face problems whose treatment requires considerable processing power and/or a large data storage capacity with high-bandwidth access to that data (so as not to compromise processing efficiency). One example of this class of problems is the search for regions of similarity in protein amino-acid sequences, or in DNA nucleotide sequences, by comparison against a given query sequence. In this field, the best-known and most widely used computational tool is probably BLAST (Basic Local Alignment Search Tool) [1]. Hence, any performance improvement to this tool has a considerable (and decidedly positive) impact on the work of those who use it regularly, whether for research or for commercial purposes. Indeed, since BLAST was first introduced, several versions with improved performance have appeared, notably through the application of parallelisation techniques to the various phases of the algorithm (e.g., partitioning and distributing the databases to be searched, segmenting the queries, etc.), capable of exploiting different parallel execution environments, such as multi-core machines (BLAST+ [2]), clusters of multi-core nodes (mpiBLAST [3]) and, more recently, accelerator co-processors such as GPUs [4] or FPGAs. It is also possible to use the BLAST family of tools through a web interface/site [5], which allows a variety of well-known (and permanently updated) databases to be searched expediently, with response times small enough for most users, thanks to the high-performance computing resources behind its backend. Even so, this way of using BLAST may not be the best option in some situations, for example when the databases to be searched are not yet in the public domain or, if they are, are not available on that website. Moreover, adopting that website as a regular working tool presupposes its permanent availability (which depends on third parties) and, on the client side, enough bandwidth for efficient interaction with it. For these reasons, it may be of interest (or even necessary) to deploy a local BLAST infrastructure, able to host the relevant databases and to support searching them as efficiently as possible, while taking into account any financial constraints that limit the type of hardware used to implement that infrastructure. In this context, a comparative study of several BLAST versions was carried out on a parallel computing infrastructure at IPB, based on commodity components: a cluster of 8 (virtual, under VMware ESXi) compute nodes (i7-4790K 4 GHz CPU, 32 GB RAM, 128 GB SSD) and one node equipped with a GPU (i7-2600 3.8 GHz CPU, 32 GB RAM, 128 GB SSD, 1 TB HD, NVIDIA GTX 580). The main focus was on evaluating the performance of the original BLAST and of mpiBLAST, since both are provided out of the box by the Linux distribution on which the cluster is based [6]. In addition, BLAST+ and gpuBLAST were evaluated on the GPU node. The evaluation covered several resource configurations, including different numbers of nodes and different storage platforms for the databases (HD, SSD, NFS).
The databases searched correspond to a representative subset of those available on the BLAST website, covering a range of sizes (from a few tens of MBytes up to around a hundred GBytes) and containing both amino-acid sequences (env_nr and nr) and nucleotide sequences (drosoph.nt, env_nt, mito.nt, nt and patnt). The searches used arbitrary 568-letter sequences in FASTA format and the default options of the various BLAST applications. Unless otherwise stated, the execution times used in the comparisons and in the speedup calculations are those of the first execution of a search, and thus do not benefit from any cache effect; this choice assumes a realistic scenario in which the same query is not usually executed several times in a row (although it may be re-executed later). The main conclusions of the comparative study were the following:
- storage resources with enough capacity to host the databases in their various versions (original/compressed, uncompressed and formatted) must be provisioned in advance; in our test scenario, the coexistence of all these versions consumed 600 GBytes;
- the time needed to prepare (format) the databases for later searching can be considerable; in our experimental scenario, formatting the heaviest databases (nr, env_nt and nt) took between 30 and 40 minutes (for BLAST) and between 45 and 55 minutes (for mpiBLAST);
- although financially more onerous, using solid-state disks instead of traditional hard disks improves database formatting time; however, the observed benefit (around 9%) falls well short of what was initially expected;
- BLAST execution time is heavily penalised when the databases are accessed over the network via NFS; in that case, it is not even worth using several cores; when the databases are local and on SSD, execution time improves considerably, especially with several cores; in that case, with 4 cores, the speedup reaches 3.5 (the ideal being 4) for protein database searches, but no more than 1.8 for nucleotide database searches;
- mpiBLAST execution time suffers greatly when the database fragments are not yet on the cluster nodes and have to be distributed before the search itself; after distribution, repeating the same queries benefits from speedups of 14 to 70; moreover, since the same database can be used to answer different queries, it is not necessary to repeat the same query to amortise the distribution effort;
- in the test scenario, using mpiBLAST with 32+2 cores, compared with BLAST with 4 cores, yields speedups that vary between 2 and 5 depending on the database searched (and previously distributed); these values fall short of the theoretical maximum of 8.5 (34/4), but still demonstrate that, when the possibility exists, it pays to run the searches on a cluster;
- the comparison of BLAST and BLAST+ (both able to exploit several cores) with gpuBLAST, carried out on the GPU node (representative of a typical workstation), indicates the best option when cluster searches are not possible; the observations made show no significant differences between BLAST and BLAST+; moreover, the performance of gpuBLAST was always worse (by approximately 50%) than that of BLAST and BLAST+, which may be explained by the age of the GPU model used;
- finally, comparing the best option in our test scenario, the use of mpiBLAST, against online searching on the BLAST website [5] reveals that mpiBLAST is quite competitive with online BLAST, and even clearly superior if mpiBLAST times that benefit from cache effects are considered; this assumption is fair, since online BLAST also exploits the same kind of effects; even so, with search times this small (< 30 s), running mpiBLAST on a local infrastructure is only defensible when the goal is to search databases that cannot be searched through online BLAST.
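
A minimal Python sketch of the speedup bookkeeping used in comparisons like these; the timings are hypothetical placeholders, not the study's measurements:

    # Hypothetical sketch of the speedup arithmetic used in the study.
    # The timings below are made-up placeholders, not measured values.
    def speedup(t_baseline_s: float, t_parallel_s: float) -> float:
        return t_baseline_s / t_parallel_s

    t_blast_4cores = 1200.0   # hypothetical seconds for one first-run query
    t_mpiblast_34 = 300.0     # hypothetical seconds for the same query
    print(f"observed speedup : {speedup(t_blast_4cores, t_mpiblast_34):.1f}")
    print(f"theoretical max  : {34 / 4:.1f}")  # 8.5, if scaling were perfect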

Relevance:

10.00%

Publisher:

Abstract:

Using a computer keyboard with the forearms unsupported has been proposed as a causal factor for neck/shoulder and arm/hand diagnoses. Recent laboratory and field studies have demonstrated that forearm support may be preferable to working in the traditional floating posture. The aim of this study was to determine whether providing forearm support when using a normal computer workstation would decrease musculoskeletal discomfort in intensive computer users in a call centre. A randomised controlled study (n = 59) of 6 weeks duration was conducted. Thirty participants (Group 1) were allocated to forearm support using the desk surface, with the remainder (Group 2) acting as a control group. At 6 weeks, the control group was also set up with forearm support. Both groups were then monitored for another 6 weeks. Questionnaires were used at 1, 6 and 12 weeks to obtain information about discomfort, workstation setup, working posture and comfort. Nine participants (Group 1 n = 6, Group 2 n = 3) withdrew within a week of commencing forearm support, either due to discomfort or difficulty in maintaining the posture. At 6 weeks, the group using forearm support generated fewer reports of discomfort in the neck and back, although the difference between the groups was not statistically significant. At 12 weeks, there were fewer reports of neck, back and wrist discomfort when pre-intervention discomfort was compared with post-intervention discomfort. These findings indicate that, for the majority of users, forearm support may be preferable to the floating posture implicit in current guidelines for computer workstation setup. (C) 2004 Elsevier Ltd. All rights reserved.

Relevance:

10.00%

Publisher:

Abstract:

Purpose. This article explores the experiences of 26 assistive technology (AT) users with a range of physical impairments as they optimized their use of technology in the workplace. Method. A qualitative research design was employed, using in-depth, open-ended interviews and observations of AT users in the workplace. Results. Participants identified many factors that limited their use of technology, such as discomfort and pain, limited knowledge of the technology's features, and the complexity of the technology. The amount of time required for training, the limited work time available for mastery, the cost of training and the limitations of the training provided resulted in an over-reliance on trial and error and on informal support networks, and in a sense of isolation. AT users enhanced their use of technology by addressing the ergonomics of the workstation and customizing the technology to address individual needs and strategies. Other key strategies included tailored training and learning support, as well as opportunities to practice using the technology and explore its features away from work demands. Conclusions. This research identified structures important for effective AT use in the workplace, which need to be put in place to ensure that AT users are able to master and optimize their use of technology.

Relevance:

10.00%

Publisher:

Abstract:

This study investigates and seeks to understand the social phenomenon of Workers' Representation in the Workplace (RLT) in the Greater ABC region. It begins with a literature review tracing the history of the factory committee and similar organisations around the world, observing their practice in those countries and addressing participative management, European socialism and self-determination. It then addresses the factory committee in Brazil, recounting the history of the first official factory committee established in the country, at the Ford plant in São Bernardo do Campo. There follows a tabulated analysis of the field research carried out, with emphasis on the following aspects: whether companies actually practise RLT; whether RLT is constituted by employees appointed by the workers, by the companies or by the workers' unions; whether RLT is regulated by statute; whether the workers' unions participate in and influence RLT; and whose interests RLT serves: the companies', the workers' unions' or the workers'. The methodology is qualitative, followed by field research conducted in groups, with these data interwoven with the author's professional experience. The study concludes that RLT is rarely practised; its members are appointed by the workers and their respective unions; regulated RLT prevails, with participation and influence from the workers' unions. RLT serves primarily the interests of the companies, followed by the interests of the workers' unions and, last, the interests of the workers.

Relevance:

10.00%

Publisher:

Abstract:

Phakometric measurements of corneal and crystalline lens surface alignment are influenced by corneal asymmetry, in which the corneal apex does not coincide with the limbal centre. The purpose of this study was to determine the horizontal separation (e) between these corneal landmarks. Measurements were made in 60 normal eyes (30 subjects) using the Orbscan IIz corneal analysis workstation. Our results show that the two corneal landmarks typically coincide, so that e = 0, but that inter-subject variations of about ±1 mm can be expected (so that the corneal apex may fall nasal or temporal to the visual axis). This suggests that no correction for corneal asymmetry is required when estimating average amounts of ocular alignment from samples of eyes, but that the measurement of e is strongly recommended for measurements in individual eyes. © 2004 The College of Optometrists.

Relevance:

10.00%

Publisher:

Abstract:

The pattern of illumination on an undulating surface can be used to infer its 3-D form (shape from shading). But the recovery of shape would be invalid if the shading actually arose from reflectance variation. When a corrugated surface is painted with an albedo texture, the variation in local mean luminance (LM) due to shading is accompanied by a similar modulation in texture amplitude (AM). This is not so for reflectance variation, nor for roughly textured surfaces. We used a haptic matching technique to show that modulations of texture amplitude play a role in the interpretation of shape from shading. Observers were shown plaid stimuli comprising LM and AM combined in-phase (LM+AM) on one oblique orientation and in anti-phase (LM-AM) on the other. Stimuli were presented via a modified ReachIN workstation allowing the co-registration of visual and haptic stimuli. In the first experiment, observers were asked to adjust the phase of a haptic surface, which had the same orientation as the LM+AM combination, until its peak in depth aligned with the visually perceived peak. The resulting alignments were consistent with the use of a lighting-from-above prior. In the second experiment, observers were asked to adjust the amplitude of the haptic surface to match that of the visually perceived surface. Observers chose relatively large amplitude settings when the haptic surface was oriented and phase-aligned with the LM+AM cue. When the haptic surface was aligned with the LM-AM cue, amplitude settings were close to zero. Thus the LM/AM phase relation is a significant visual depth cue, and is used to discriminate between shading and reflectance variations. [Supported by the Engineering and Physical Sciences Research Council, EPSRC].
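
A minimal numpy sketch of how a signal with luminance modulation (LM) and texture amplitude modulation (AM) in-phase or in anti-phase can be constructed; the carrier, modulation depths and scaling are hypothetical choices, not the study's stimulus parameters:

    # Hypothetical sketch: a 1-D texture whose local mean luminance (LM)
    # and texture amplitude (AM) are modulated in-phase (LM+AM) or in
    # anti-phase (LM-AM). All parameter values are illustrative only.
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 2.0 * np.pi, 1024)
    carrier = rng.uniform(-1.0, 1.0, x.size)  # noise texture carrier
    mod = np.sin(x)                           # one cycle of modulation
    m_lm, m_am, contrast = 0.2, 0.5, 0.1      # hypothetical depths

    # In-phase: texture amplitude rises where mean luminance rises.
    lm_plus_am = (1 + m_lm * mod) + (1 + m_am * mod) * contrast * carrier
    # Anti-phase: texture amplitude falls where mean luminance rises.
    lm_minus_am = (1 + m_lm * mod) + (1 - m_am * mod) * contrast * carrier

Shading a textured corrugation yields the in-phase pairing, whereas a reflectance change does not; that asymmetry is what allows the phase relation to act as a cue.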

Relevance:

10.00%

Publisher:

Abstract:

Grafting of antioxidants and other modifiers onto polymers by reactive extrusion has been performed successfully by the Polymer Processing and Performance Group at Aston University. Traditionally, the optimum conditions for the grafting process have been established within a Brabender internal mixer. Transfer of this batch process to a continuous processor, such as an extruder, has typically been empirical. To have more confidence in the success of direct transfer of the process requires knowledge of, and comparison between, residence times, mixing intensities, shear rates and flow regimes in the internal mixer and in the continuous processor. The continuous processor chosen for the current work is the closely intermeshing, co-rotating twin-screw extruder (CICo-TSE). CICo-TSEs contain screw elements that convey material with a self-wiping action and are widely used for polymer compounding and blending. Of the different mixing modules contained within the CICo-TSE, the trilobal elements, which impose intensive mixing, and the mixing discs, which impose extensive mixing, are of importance when establishing the intensity of mixing. In this thesis, the flow patterns within the various regions of the single-flighted conveying screw elements and within both the trilobal element and mixing disc zones of a Betol BTS40 CICo-TSE have been modelled using the computational fluid dynamics package Polyflow. A major obstacle encountered when solving the flow problem within all of these sets of elements arises from both the complex geometry and the time-dependent flow boundaries as the elements rotate about their fixed axes. Simulation of the time-dependent boundaries was overcome by selecting a number of sequential 2D and 3D geometries, used to represent partial mixing cycles. The flow fields were simulated using the ideal rheological properties of polypropylene and characterised in terms of velocity vectors, shear stresses generated and a parameter known as the mixing efficiency. The majority of the large 3D simulations were performed on the Cray J90 supercomputer at the Rutherford Appleton Laboratory, with pre- and post-processing operations achieved via a Silicon Graphics Indy workstation. A mechanical model was constructed, consisting of various CICo-TSE elements rotating within a transparent outer barrel. A technique has been developed using coloured viscous clays whereby the flow patterns and mixing characteristics within the CICo-TSE may be visualised. In order to test and verify the simulated predictions, the patterns observed within the mechanical model were compared with the flow patterns predicted by the computational model. The flow patterns within the single-flighted conveying screw elements, in particular, showed good agreement between the experimental and simulated results.

Relevance:

10.00%

Publisher:

Abstract:

Fault tree analysis is used as a tool within hazard and operability (Hazop) studies. The present study proposes a new methodology for obtaining the exact TOP event probability of coherent fault trees. The technique uses a top-down approach similar to that of FATRAM. This new Fault Tree Disjoint Reduction Algorithm resolves all the intermediate events in the tree except OR gates with basic-event inputs, so that a near-minimal cut set expression is obtained. Bennetts' disjoint technique is then applied and the remaining OR gates are resolved. The technique has been found to be an appropriate alternative to Monte Carlo simulation methods when rare events are encountered and exact results are needed. The algorithm has been developed in FORTRAN 77 on the PERQ workstation as an addition to the Aston Hazop package. The PERQ graphical environment enabled a friendly user interface to be created. The total package takes as its input cause and symptom equations using Lihou's form of coding, and produces both drawings of fault trees and the Boolean sum-of-products expression into which reliability data can be substituted directly.
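
The exact TOP event probability of a coherent tree can be computed from its minimal cut sets. A minimal Python sketch using inclusion-exclusion over the cut sets (a simpler stand-in for the disjoint-products technique the abstract names, giving the same exact result for independent basic events); the tree below is a made-up example:

    # Minimal sketch: exact TOP event probability from minimal cut sets,
    # assuming independent basic events. Uses inclusion-exclusion rather
    # than Bennetts' disjoint technique; cut sets and probabilities are
    # made-up examples, not taken from the study.
    from itertools import combinations
    from math import prod

    p = {"A": 1e-3, "B": 5e-4, "C": 2e-3, "D": 1e-4}  # basic event probabilities
    cut_sets = [{"A", "B"}, {"C"}, {"B", "D"}]         # hypothetical minimal cut sets

    def top_probability(cut_sets, p):
        total = 0.0
        for k in range(1, len(cut_sets) + 1):
            for combo in combinations(cut_sets, k):
                events = set().union(*combo)            # events in this intersection
                term = prod(p[e] for e in events)
                total += term if k % 2 == 1 else -term  # inclusion-exclusion sign
        return total

    print(f"P(TOP) = {top_probability(cut_sets, p):.6e}")

Inclusion-exclusion enumerates every subset of cut sets, which grows exponentially; rewriting the sum of products into disjoint form, as the abstract describes, is one way to keep the exact calculation manageable for realistic trees.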

Relevance:

10.00%

Publisher:

Abstract:

Cold roll forming of thin-walled sections is a very useful process in the sheet metal industry. However, the conventional method for the design and manufacture of form-rolls, the special tooling used in the cold roll forming process, is a very time-consuming and skill-demanding exercise. This thesis describes the establishment of a stand-alone, minicomputer-based CAD/CAM system for assisting the design and manufacture of form-rolls. The work was undertaken in collaboration with a leading manufacturer of thin-walled sections. A package of computer programs has been developed to provide computer aids for every aspect of work in form-roll design and manufacture. The programs have been successfully implemented, as an integrated CAD/CAM software system, on the ICL PERQ minicomputer with graphics facilities. The developed CAD/CAM system is thus a single-user workstation, with software facilities to help the user perform the conventional roll design activities, including the design of the finished section, the flower pattern and the form-rolls. A roll editor program can then be used to modify, if required, the computer-generated roll profiles. As far as manufacturing is concerned, a special-purpose roll machining program and postprocessor can be used in conjunction to generate the NC control part-programs for the production of form-rolls by NC turning. Graphics facilities have been incorporated into the CAD/CAM software programs to display drawings interactively on the computer screen throughout all stages of execution of the CAD/CAM software. It has been found that computerisation can shorten the lead time in all activities dealing with the design and manufacture of form-rolls, and that small or medium-sized manufacturing companies can benefit from CAD/CAM technology by developing, to their own specification, a tailor-made CAD/CAM software system on a low-cost minicomputer.

Relevance:

10.00%

Publisher:

Abstract:

The trend in modal extraction algorithms is to use all the available frequency response function data to obtain a global estimate of the natural frequencies, damping ratios and mode shapes. Improvements in transducer and signal processing technology allow the simultaneous measurement of many hundreds of channels of response data. The quantity of data available and the complexity of the extraction algorithms make considerable demands on the available computer power, and require a powerful computer or dedicated workstation to perform satisfactorily. An alternative to waiting for faster sequential processors is to implement the algorithm in parallel, for example on a network of Transputers. Parallel architectures are a cost-effective means of increasing computational power, and a larger number of response channels would simply require more processors. This thesis considers how two typical modal extraction algorithms, the Rational Fraction Polynomial method and the Ibrahim Time Domain method, may be implemented on a network of Transputers. The Rational Fraction Polynomial method is a well-known and robust frequency-domain 'curve fitting' algorithm. The Ibrahim Time Domain method is an efficient algorithm that 'curve fits' in the time domain. This thesis reviews the algorithms, considers the problems involved in a parallel implementation, and shows how they were implemented on a real Transputer network.
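
A minimal numpy sketch of the Ibrahim Time Domain idea on synthetic single-channel data: fit a discrete-time system matrix to free-decay responses and read the modal parameters from its eigenvalues. The signal, sampling values and model size are made-up, and a real implementation stacks many measurement channels and delayed copies:

    # Hypothetical sketch of the Ibrahim Time Domain idea. The synthetic
    # free-decay signal and its parameters are illustrative only.
    import numpy as np

    dt = 0.001                                   # sample interval [s]
    t = np.arange(0.0, 1.0, dt)
    wn, zeta = 2 * np.pi * 10.0, 0.02            # 10 Hz mode, 2% damping
    wd = wn * np.sqrt(1 - zeta**2)
    y = np.exp(-zeta * wn * t) * np.cos(wd * t)  # free-decay response

    # Data matrices built from time-shifted copies of the response.
    N = len(y) - 2
    X0 = np.array([y[0:N], y[1:N + 1]])
    X1 = np.array([y[1:N + 1], y[2:N + 2]])

    A = X1 @ np.linalg.pinv(X0)                  # least-squares system matrix
    poles = np.log(np.linalg.eigvals(A).astype(complex)) / dt
    for s in poles:
        if s.imag > 0:                           # one of each conjugate pair
            print(f"f = {abs(s)/(2*np.pi):.2f} Hz, zeta = {-s.real/abs(s):.3f}")

Because each channel's block rows can be formed and multiplied independently, this least-squares step is the kind of work that distributes naturally over a processor network such as the Transputers discussed here.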

Relevance:

10.00%

Publisher:

Abstract:

Background: The aim was to evaluate the validity and repeatability of the auto-refraction function of the Nidek OPD-Scan III (Nidek Technologies, Gamagori, Japan) compared with non-cycloplegic subjective refraction. The Nidek OPD-Scan III is a new aberrometer/corneal topographer workstation based on the skiascopy principle. It combines a wavefront aberrometer, topographer, autorefractor, auto-keratometer and pupillometer/pupillographer. Methods: Objective refraction results obtained using the Nidek OPD-Scan III were compared with non-cycloplegic subjective refraction for 108 eyes of 54 participants (29 female) with a mean age of 23.7±9.5 years. Intra-session and inter-session variability were assessed in 14 subjects (28 eyes). Results: The Nidek OPD-Scan III gave slightly more negative readings than subjective refraction (mean difference -0.19±0.36 DS, p<0.01 for sphere; -0.19±0.35 DS, p<0.01 for mean spherical equivalent; -0.002±0.23 DC, p=0.91 for cylinder; -0.06±0.38 DC, p=0.30 for J0; and -0.36±0.31 DC, p=0.29 for J45). Autorefractor results were within ±0.25 D of subjective refraction for 74 per cent of spherical readings and 60 per cent of cylindrical powers. There was high intra-session and inter-session repeatability for all parameters; 90 per cent of inter-session repeatability results were within 0.25 D. Conclusion: The Nidek OPD-Scan III gives valid and repeatable measures of objective refraction when compared with non-cycloplegic subjective refraction. © 2013 The Authors. Clinical and Experimental Optometry © 2013 Optometrists Association Australia.
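
The J0 and J45 values reported here are astigmatic power-vector components. A minimal Python sketch of the standard sphere/cylinder/axis to power-vector conversion (after Thibos and colleagues), with an example prescription that is illustrative only:

    # Minimal sketch: convert a sphere/cylinder/axis refraction into the
    # power-vector components (M, J0, J45) used in comparisons like this.
    # The example prescription below is illustrative only.
    import math

    def power_vector(sphere_d: float, cyl_d: float, axis_deg: float):
        """Thibos power-vector components of a spectacle refraction."""
        m = sphere_d + cyl_d / 2.0                                 # spherical equivalent
        j0 = -(cyl_d / 2.0) * math.cos(math.radians(2 * axis_deg))
        j45 = -(cyl_d / 2.0) * math.sin(math.radians(2 * axis_deg))
        return m, j0, j45

    m, j0, j45 = power_vector(-1.50, -0.75, 20)                    # hypothetical Rx
    print(f"M = {m:+.2f} D, J0 = {j0:+.2f} D, J45 = {j45:+.2f} D")

Working in (M, J0, J45) turns refractions into points in a vector space, so mean differences and their statistics, as reported above, are well defined.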

Relevance:

10.00%

Publisher:

Abstract:

Three-dimensional (3-D) imaging is vital in computer-assisted surgical planning, including minimally invasive surgery, targeted drug delivery and tumor resection. Selective Internal Radiation Therapy (SIRT) is a liver-directed radiation therapy for the treatment of liver cancer. Accurate calculation of anatomical liver and tumor volumes is essential for determining the tumor-to-normal-liver ratio and for calculating the dose of Y-90 microspheres that will result in a high concentration of the radiation in the tumor region as compared to nearby healthy tissue. Present manual techniques for segmentation of the liver from Computed Tomography (CT) tend to be tedious and greatly dependent on the skill of the technician/doctor performing the task. This dissertation presents the development and implementation of a fully integrated algorithm for 3-D liver and tumor segmentation from tri-phase CT that yields highly accurate estimates of the respective volumes of the liver and tumor(s). The algorithm as designed requires minimal human intervention without compromising the accuracy of the segmentation results. Embedded within this algorithm is an effective method for extracting the blood vessels that feed the tumor(s), in order to plan the appropriate treatment effectively. Segmentation of the liver achieved an accuracy in excess of 95% in estimating liver volumes in 20 datasets, in comparison to the manual gold-standard volumes. In a similar comparison, tumor segmentation exhibited an accuracy of 86% in estimating tumor volume(s). Qualitative results of the blood vessel segmentation algorithm demonstrated its effectiveness in extracting and rendering the vasculature structure of the liver. Results of the parallel computing process, using a single workstation, showed a 78% gain. Statistical analysis carried out to determine whether manual initialization has any impact on accuracy showed that the results are independent of user initialization. The dissertation thus provides a complete 3-D solution for liver cancer treatment planning, with the ability to extract, visualize and quantify the statistics needed for liver cancer treatment. Since SIRT requires highly accurate calculation of the liver and tumor volumes, this new method provides an effective and computationally efficient process that meets these challenging clinical requirements.
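
A minimal Python sketch of the volume bookkeeping such a comparison relies on: voxel counts scaled by voxel size, checked against a manual gold standard. All shapes, spacings and volumes below are made-up examples, not the dissertation's data:

    # Hypothetical sketch: volume of a segmented region from a binary CT
    # mask, and percent volume accuracy against a manual gold standard.
    # Array shapes, spacing and volumes are made-up examples.
    import numpy as np

    voxel_spacing_mm = (0.8, 0.8, 2.5)                   # hypothetical (x, y, z) spacing
    voxel_volume_ml = np.prod(voxel_spacing_mm) / 1000.0 # mm^3 -> mL

    rng = np.random.default_rng(0)
    auto_mask = rng.random((64, 64, 32)) > 0.5           # stand-in for a segmentation
    auto_volume_ml = auto_mask.sum() * voxel_volume_ml

    gold_volume_ml = 1650.0                              # hypothetical manual volume
    accuracy_pct = 100.0 * (1 - abs(auto_volume_ml - gold_volume_ml) / gold_volume_ml)
    print(f"auto volume: {auto_volume_ml:.0f} mL, accuracy: {accuracy_pct:.1f}%")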