955 results for 3D CAD software for apparel
Abstract:
This research explored how small and medium enterprises can achieve success with software-as-a-service (SaaS) applications from the cloud. Based on an empirical investigation of six growth-oriented, early-technology-adopting small and medium enterprises, this study proposes a model of SaaS success for small and medium enterprises with two variants: one for basic and one for advanced benefits. The basic model explains the effective use of SaaS for achieving informational and transactional benefits. The advanced model explains the enhanced use of SaaS for achieving strategic and transformational benefits. Both models explicate the information systems capabilities and organizational complementarities needed to achieve success with SaaS.
Abstract:
The results of a high-resolution ambient STM study of ‘sulflower’ (octathio[8]circulene) and ‘selenosulflower’ (sym-tetraselena-tetrathio[8]circulene) molecules, immobilized in a hydrogen-bonded matrix of trimesic acid (TMA) at the solid–liquid interface, are compared with the STM and X-ray structure of separate host and guest 2D and 3D crystals, respectively.
Abstract:
Bug fixing is a highly cooperative work activity in which developers, testers, product managers and other stakeholders collaborate using a bug tracking system. In the context of Global Software Development (GSD), where software development is distributed across geographical locations, we focus on understanding the role of bug trackers in supporting bug fixing activities. We carried out small-scale ethnographic fieldwork in a software product team distributed between Finland and India at a multinational engineering company. Using semi-structured interviews and in-situ observations of 16 bug cases, we show that the bug tracker 1) supported the information needs of different stakeholders, 2) established common ground, and 3) reinforced issues related to ownership, performance and power. We conclude with implications for design around these findings.
Abstract:
Background: The Circle of Willis (CoW) is the most important collateral pathway of the cerebral arteries. The present study investigates the collateral capacity of the CoW with anatomical variation when the unilateral internal carotid artery (ICA) is occluded. Methods: Based on MRI data, we reconstructed eight 3D models with variations in the posterior circulation of the CoW and set four degrees of stenosis in the right ICA: 24%, 43%, 64% and 79%. A total of 40 models were run through computational fluid dynamics simulations. All simulations share the same static-pressure boundary condition, and the volume flow rates (VFR) at the outlets are obtained to evaluate collateral capacity. Results: For the middle cerebral artery (MCA) and the anterior cerebral artery (ACA), the transitional-type model possesses the best collateral capacity; for the posterior cerebral artery (PCA), however, unilateral ICA stenosis has the weakest influence on the model with a unilaterally absent posterior communicating artery (PCoA). We also find that the full fetal-type posterior circle of Willis is a particularly dangerous variation that warrants special attention. Conclusion: The results demonstrate that different models have different collateral capacities in coping with stenosis of the unilateral ICA, and these differences are reflected at the different outlets. The study could serve as a reference for neurosurgeons in choosing the best treatment strategy.
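The abstract does not detail the haemodynamic setup, but the idea of screening collateral capacity under increasing ICA stenosis can be illustrated with a toy lumped-parameter model: treat each CoW segment as a Poiseuille resistance, solve the resulting linear network, and read off outlet volume flow rates. All radii, lengths, pressures and the network topology below are illustrative assumptions, not the paper's patient-specific 3D CFD models.

```python
# Toy Poiseuille-resistance network for screening Circle of Willis collateral
# capacity. Radii, lengths and pressures are illustrative assumptions, not
# the paper's patient-specific 3D CFD geometry.
import numpy as np

MU = 3.5e-3  # blood viscosity, Pa*s (assumed)

def conductance(radius_mm, length_mm):
    """Poiseuille conductance G = pi*r^4 / (8*mu*L), SI units."""
    r, L = radius_mm * 1e-3, length_mm * 1e-3
    return np.pi * r**4 / (8 * MU * L)

# Internal nodes: 0 = right carotid junction, 1 = left carotid junction,
# 2 = basilar top. Boundary pressures (Pa, assumed): arterial in, outlet side.
P_ART, P_OUT = 12000.0, 2000.0
OUTLETS = [("R-MCA", (0, 1.4, 50.0)), ("L-MCA", (1, 1.4, 50.0)),
           ("R-ACA", (0, 1.1, 50.0)), ("L-ACA", (1, 1.1, 50.0)),
           ("R-PCA", (2, 1.1, 50.0)), ("L-PCA", (2, 1.1, 50.0))]

def solve_outflows(stenosis):
    """Return outlet flows (ml/s) for a given right-ICA diameter stenosis."""
    # (node_a, node_b, radius_mm, length_mm); -1 = arterial, -2 = outlet
    edges = [(-1, 0, 2.0 * (1 - stenosis), 20.0),   # right ICA, stenosed
             (-1, 1, 2.0, 20.0),                    # left ICA
             (-1, 2, 1.6, 25.0),                    # basilar artery
             (0, 1, 0.7, 3.0),                      # ACoA
             (0, 2, 0.7, 15.0), (1, 2, 0.7, 15.0)]  # right/left PCoA
    edges += [(n, -2, r, L) for _, (n, r, L) in OUTLETS]
    A, b = np.zeros((3, 3)), np.zeros(3)
    for na, nb, r, L in edges:          # assemble nodal conservation system
        g = conductance(r, L)
        for i, j in ((na, nb), (nb, na)):
            if i >= 0:
                A[i, i] += g
                if j >= 0:
                    A[i, j] -= g
                else:
                    b[i] += g * (P_ART if j == -1 else P_OUT)
    p = np.linalg.solve(A, b)           # internal node pressures
    return {name: conductance(r, L) * (p[n] - P_OUT) * 1e6  # m^3/s -> ml/s
            for name, (n, r, L) in OUTLETS}

base = solve_outflows(0.0)
for s in (0.24, 0.43, 0.64, 0.79):      # stenosis degrees from the abstract
    q = solve_outflows(s)
    print(f"stenosis {s:.0%}: R-MCA VFR at "
          f"{100 * q['R-MCA'] / base['R-MCA']:.1f}% of baseline")
```

Dropping one of the PCoA edges from the list mimics the unilateral-absence variant, which is how such a toy model would compare collateral capacities across CoW anatomies.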
Abstract:
A three-dimensional (3D) mathematical model of tumour growth at the avascular phase and vessel remodelling in host tissues is proposed, with emphasis on the interactions between tumour growth and the hypoxic micro-environment in host tissues. The hybrid model includes a continuum part, the distributions of oxygen and vascular endothelial growth factors (VEGFs), and a discrete part, the tumour cells (TCs) and blood vessel networks. The simulation shows the dynamic process of avascular tumour growth from a few initial cells to an equilibrium state with varied vessel networks. After a phase of rapidly increasing TC numbers, more and more host vessels collapse under the stress exerted by the growing tumour. In addition, the region of oxygen consumption expands with the enlarging tumour. The study also discusses the effects of certain factors on tumour growth, including the density and configuration of pre-existing vessel networks and the blood oxygen content. The model enables us to examine the relationship between early tumour growth and the hypoxic micro-environment in host tissues, which can be useful for further applications, such as tumour metastasis and the initiation of tumour angiogenesis.
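A minimal two-dimensional sketch of the hybrid structure the abstract describes: a continuum oxygen field updated by finite-difference reaction-diffusion, coupled to discrete tumour cells that divide or die according to local oxygen, with overrun vessels collapsing. All parameters are illustrative, not the paper's calibrated values.

```python
# Minimal 2D sketch of a hybrid tumour-growth model: continuum oxygen
# (reaction-diffusion) coupled to discrete cells. Parameters are illustrative.
import numpy as np

N, STEPS = 64, 200
D, SUPPLY, UPTAKE = 0.2, 0.05, 0.12   # diffusion, vessel supply, cell uptake
DIVIDE_O2, DIE_O2 = 0.5, 0.15         # oxygen thresholds (arbitrary units)
rng = np.random.default_rng(0)

oxygen = np.ones((N, N))
cells = np.zeros((N, N), dtype=bool)
cells[N // 2, N // 2] = True          # a few initial tumour cells
vessels = rng.random((N, N)) < 0.02   # random pre-existing vessel sites

for step in range(STEPS):
    # continuum part: diffuse oxygen, add vessel supply, subtract cell uptake
    lap = (np.roll(oxygen, 1, 0) + np.roll(oxygen, -1, 0) +
           np.roll(oxygen, 1, 1) + np.roll(oxygen, -1, 1) - 4 * oxygen)
    oxygen += D * lap + SUPPLY * vessels - UPTAKE * cells
    np.clip(oxygen, 0.0, 1.0, out=oxygen)

    # discrete part: a cell divides into a neighbouring site if well
    # oxygenated, and dies under severe hypoxia
    for x, y in zip(*np.nonzero(cells)):
        if oxygen[x, y] > DIVIDE_O2 and rng.random() < 0.3:
            dx, dy = rng.choice([-1, 0, 1]), rng.choice([-1, 0, 1])
            cells[(x + dx) % N, (y + dy) % N] = True
        elif oxygen[x, y] < DIE_O2:
            cells[x, y] = False

    # the growing tumour collapses vessels it overruns (stress, as in the
    # abstract), which in turn tightens the oxygen supply
    vessels &= ~cells

print(f"final tumour size: {cells.sum()} cells, "
      f"surviving vessel sites: {vessels.sum()}")
```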
Abstract:
The hydrothermal reaction of Ln(NO₃)₃, Ni(NO₃)₂, NaN₃, and isonicotinic acid (L) yielded two novel 3D coordination frameworks (1 and 2) of general formula [Ni₂Ln(L)₅(N₃)₂(H₂O)₃]·2H₂O (Ln = Pr(III) for 1 and Nd(III) for 2), containing Ni-Pr or Ni-Nd hybrid extended three-dimensional networks with both azido and carboxylate co-ligands. Both compounds are isostructural and crystallize in the monoclinic system with the P2₁/n space group. The lanthanide ions are nine-coordinate. Both bidentate and monodentate binding modes of the carboxylate to the lanthanides are observed in these complexes. Variable-temperature magnetic studies of the two complexes over the range 2-300 K show dominant antiferromagnetic interactions in both cases, and the experimental results are analyzed against theoretical models.
Abstract:
This paper presents an overview of the issues in precisely defining, specifying and evaluating the dependability of software, particularly in the context of computer-controlled process systems. Dependability is intended as a generic term embodying various quality factors and is useful for both software and hardware. While developments in quality assurance and reliability theory have proceeded mostly along independent directions for hardware and software systems, we present the case for a unified framework of dependability, a facet of the operational effectiveness of modern technological systems, and develop a hierarchical systems model that helps clarify this view. In the second half of the paper, we survey the models and methods available for measuring and improving software reliability. The nature of software “bugs”, the failure history of the software system in the various phases of its life cycle, reliability growth in the development phase, estimation of the number of errors remaining in the operational phase, and the complexity of the debugging process are all considered in varying degrees of detail. We also discuss the notion of software fault tolerance, methods of achieving it, and the status of other measures of software dependability such as maintainability, availability and safety.
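The survey covers reliability growth and residual-error estimation without reproducing formulas here; a standard example of the model class in question, assumed for illustration and not necessarily the paper's own choice, is the Goel-Okumoto NHPP model. With a the expected total fault count and b the per-fault detection rate, the expected cumulative number of failures by testing time t and the failure intensity are

```latex
\[
  m(t) = a\left(1 - e^{-bt}\right), \qquad
  \lambda(t) = \frac{dm}{dt} = a b\, e^{-bt},
\]
```

so the expected number of faults remaining after testing to time t is a - m(t) = a e^{-bt}, which is the kind of operational-phase residual-error estimate the abstract mentions.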
Abstract:
The literature contains many examples of digital procedures for the analytical treatment of electroencephalograms, but there is as yet no standard by which those techniques may be judged or compared. This paper proposes one method of generating an EEG, based on a computer program implementing Zetterberg's simulation. It is assumed that the statistical properties of an EEG may be represented by stationary processes having rational transfer functions, realized by a system of software filters and random number generators. The model represents neither the neurological mechanism responsible for generating the EEG, nor any particular type of EEG record; transient phenomena such as spikes, sharp waves and alpha bursts are also excluded. The basis of the program is a valid ‘partial’ statistical description of the EEG; that description is then used to produce a digital representation of a signal which, if plotted sequentially, might or might not by chance resemble an EEG; that is unimportant. What is important is that the statistical properties of the series remain those of a real EEG; it is in this sense that the output is a simulation of the EEG. There is considerable flexibility in the form of the output, i.e. its alpha, beta and delta content, which may be selected by the user, the same selected parameters always producing the same statistical output. The filtered outputs from the random number sequences may be scaled to provide realistic power distributions in the accepted EEG frequency bands and then summed to create a digital output signal, the ‘stationary EEG’. It is suggested that the simulator might act as a test input to digital analytical techniques for the EEG, enabling at least a substantial part of those techniques to be compared and assessed in an objective manner. The equations necessary to implement the model are given. The program has been run on a DEC 1090 computer but is suitable for any microcomputer having more than 32 kBytes of memory; the execution time required to generate a 25 s simulated EEG is in the region of 15 s.
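The scheme the abstract describes, filtering white noise through rational transfer functions, scaling each band's power, and summing, is straightforward to reproduce. Below is a minimal modern sketch; the band edges, relative amplitudes and filter order are assumptions for illustration, not Zetterberg's original parameterization.

```python
# Minimal sketch of the abstract's scheme: shape white noise with rational
# (IIR) band-pass filters for the standard EEG bands, scale each band's
# power, and sum into a 'stationary EEG'. Band edges and relative
# amplitudes are assumed, not Zetterberg's original parameters.
import numpy as np
from scipy.signal import butter, lfilter

FS = 256                    # sampling rate, Hz (assumed)
DURATION = 25               # seconds, matching the abstract's timing example
BANDS = {                   # (low Hz, high Hz): relative RMS amplitude
    "delta": ((0.5, 4.0), 1.0),
    "theta": ((4.0, 8.0), 0.6),
    "alpha": ((8.0, 13.0), 1.5),   # alpha-dominant record (user-selectable)
    "beta":  ((13.0, 30.0), 0.4),
}

def simulate_eeg(seed=42):
    """Return a stationary synthetic EEG: summed, power-scaled band noise."""
    rng = np.random.default_rng(seed)  # fixed seed: the same selected
    n = FS * DURATION                  # parameters give the same output
    eeg = np.zeros(n)
    for (lo, hi), amp in BANDS.values():
        b, a = butter(4, [lo, hi], btype="bandpass", fs=FS)
        band = lfilter(b, a, rng.standard_normal(n))  # rational-TF filtering
        band *= amp / band.std()       # scale to the band's target RMS
        eeg += band
    return eeg

signal = simulate_eeg()
print(signal.shape, round(signal.std(), 2))
```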
Abstract:
This thesis examines the professionalism of Finnish television subtitlers, their translation process, and the effects of digital subtitling software on the subtitling process, from the perspective of professional subtitlers. The digitalization of Finnish television has shaken up the subtitling field as well, now that the video material to be subtitled is delivered to translation agencies and subtitlers in digital form. The theoretical section covers translation and subtitling research and training in Finland, professional skill and professionalism, and translation aids. Subtitling is presented as a specialized form of translation, while noting that translation is only one stage of the subtitling process. The theoretical section closes with a discussion of the everyday work and current professional landscape of Finnish television subtitlers: subtitlers work under widely varying terms of employment, and quality criteria may have to be reassessed. The empirical section opens by observing that surprisingly few Finnish television subtitlers have been interviewed and, drawing on Jääskeläinen's ideas, that much in the field of subtitling remains unexplored, the Finnish subtitling process in particular. The subjects of the study are translators who produce television subtitles professionally. In early winter 2008, a questionnaire was sent to subtitlers working for a Finnish translation agency specializing in subtitling; through both multiple-choice and open questions, it surveyed their professionalism, working methods, translation and subtitling process, professional pride and identity, time management, and the digital subtitling software they use. The study revealed that nearly a third of the respondents hold a neutral or even negative view of their profession; what these subtitlers have in common is less than five years of experience in the field. The majority of respondents, however, are proud to work as professionals of the Finnish language. In the questionnaire, the subtitling process was divided into preview, translation, timing and review stages. Among other things, the subtitlers were asked to estimate the total duration of their subtitling process. The durations varied greatly, and at least part of the variation correlated with experience. Slightly over half of the respondents have acquired digital subtitling software of their own, while some still do the timing at the translation agency, partly because of the software's high cost. With digital software, the subtitling process and working practices have changed as videocassette recorders and television sets have given way to the computer alone. It is now possible to work remotely from afar, to alternate between translating and timing, or to pre-time and then translate. Digital technology has thus enabled changes in the subtitling process and alternative working methods, but not every method necessarily benefits the subtitler. The traditional subtitling process (preview, marking subtitle divisions in the script, translating and drafting subtitles, corrections and review screening) still appears to be the most efficient. Although working practices differ, the overall impression is that, after the initial stumbles of digitalization, subtitlers' work has become more efficient.
Abstract:
This paper describes the main features of ARDBID (A Relational Database for Interactive Design). An overview of the organization of the database is presented, together with a detailed description of the data definition and manipulation languages. These have been implemented on a DEC 1090 system.
Abstract:
Light interception is a major factor influencing plant development and biomass production. Several methods have been proposed to determine this variable, but its calculation remains difficult in artificial environments with heterogeneous light. We propose a method that uses 3D virtual plant modelling and directional light characterisation to estimate light interception in highly heterogeneous light environments such as growth chambers and glasshouses. Intercepted light was estimated by coupling an architectural model and a light model for different genotypes of the rosette species Arabidopsis thaliana (L.) Heynh and a sunflower crop. The model was applied to plants of contrasting architectures, cultivated in isolation or in canopy, in natural or artificial environments, and under contrasting light conditions. The model gave satisfactory results when compared with observed data and enabled calculation of light interception in situations where direct measurements or classical methods were inefficient, such as young crops, isolated plants or artificial conditions. Furthermore, the model revealed that A. thaliana increased its light interception efficiency when shaded. To conclude, the method can be used to calculate intercepted light at organ, plant and plot levels, in natural and artificial environments, and should be useful in the investigation of genotype-environment interactions for plant architecture and light interception efficiency. This paper originates from a presentation at the 5th International Workshop on Functional–Structural Plant Models, Napier, New Zealand, November 2007.
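The coupling the abstract describes reduces, in its simplest form, to summing over sky directions the directional light flux times the plant's projected, overlap-corrected leaf area in that direction. The sketch below is a toy version with an assumed two-triangle "plant", a three-direction sky, and a coarse pixel buffer standing in for the paper's light model.

```python
# Toy directional light interception: intercepted light = sum over sky
# directions of flux times projected leaf area. Mesh, sky discretization
# and fluxes are illustrative assumptions.
import numpy as np

def projected_area(triangles, direction, res=200, extent=0.3):
    """Area (m^2) of the union of triangles projected along 'direction',
    rasterised on a res x res buffer so overlapping leaves count once."""
    d = direction / np.linalg.norm(direction)
    u = np.cross(d, [0.0, 0.0, 1.0])        # basis of the projection plane
    if np.linalg.norm(u) < 1e-9:            # direction parallel to z-axis
        u = np.array([1.0, 0.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(d, u)
    buf = np.zeros((res, res), dtype=bool)
    px = 2 * extent / res                   # pixel size in metres
    for tri in triangles:                   # rasterise each leaf triangle
        pts = np.array([[p @ u, p @ v] for p in tri])
        lo = np.floor((pts.min(0) + extent) / px).astype(int)
        hi = np.ceil((pts.max(0) + extent) / px).astype(int)
        for i in range(max(lo[0], 0), min(hi[0], res)):
            for j in range(max(lo[1], 0), min(hi[1], res)):
                c = np.array([(i + 0.5) * px - extent,
                              (j + 0.5) * px - extent])
                # barycentric point-in-triangle test
                a, b, w = pts[1] - pts[0], pts[2] - pts[0], c - pts[0]
                den = a[0] * b[1] - a[1] * b[0]
                if abs(den) < 1e-12:
                    continue
                s = (w[0] * b[1] - w[1] * b[0]) / den
                t = (a[0] * w[1] - a[1] * w[0]) / den
                if s >= 0 and t >= 0 and s + t <= 1:
                    buf[i, j] = True
    return buf.sum() * px * px

# a tiny two-leaf 'plant' (coordinates in m) and a 3-direction sky
leaves = [np.array([[0, 0, .1], [.1, 0, .15], [0, .1, .15]]),
          np.array([[0, 0, .2], [-.1, 0, .25], [0, -.1, .25]])]
sky = [(np.array([0, 0, -1.0]), 300.0),     # zenith, W m-2
       (np.array([.5, 0, -1.0]), 150.0),
       (np.array([-.5, 0, -1.0]), 150.0)]
total = sum(flux * projected_area(leaves, d) for d, flux in sky)
print(f"intercepted light: {total:.2f} W")
```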
Abstract:
Background: With advances in DNA sequencer-based technologies, it has become possible to automate several steps of the genotyping process, leading to increased throughput. To efficiently handle the large amounts of genotypic data generated and to help with quality control, there is a strong need for a software system that supports the tracking of samples and the capture and management of data at each step of the process. Such systems, while serving to manage the workflow precisely, also encourage good laboratory practice by standardizing protocols and by recording and annotating data from every step of the workflow. Results: A laboratory information management system (LIMS) has been designed and implemented at the International Crops Research Institute for the Semi-Arid Tropics (ICRISAT) that meets the requirements of a moderately high-throughput molecular genotyping facility. The application is modular and is simple to learn and use. It leads the user through each step of the process, from starting an experiment to storing the output data of the genotype detection step with auto-binning of alleles, thus ensuring that every DNA sample is handled in an identical manner and all the necessary data are captured. The application keeps track of DNA samples and generated data. Data entry into the system is through forms and file uploads. The LIMS provides functions to trace any genotypic data back to the electrophoresis gel files or the sample source, and to repeat experiments. The LIMS is presently used to capture high-throughput SSR (simple-sequence repeat) genotyping data from the legume (chickpea, groundnut and pigeonpea) and cereal (sorghum and millets) crops of importance in the semi-arid tropics. Conclusions: A laboratory information management system is available that has been found useful in the management of microsatellite genotype data in a moderately high-throughput genotyping laboratory. The application, with source code, is freely available for academic users and can be downloaded from http://www.icrisat.org/bt-software-d-lims.htm
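The abstract mentions "auto-binning of alleles" without specifying the algorithm; a common approach, assumed here purely for illustration rather than taken from the LIMS's documentation, is to cluster raw fragment sizes by gap tolerance and call each bin by its mean size.

```python
# Illustrative sketch of SSR allele auto-binning: cluster raw fragment
# sizes (base pairs) into allele bins using a gap tolerance, then label
# each bin by its mean size. The tolerance and the gap rule are
# assumptions, not the LIMS's documented algorithm.
from statistics import mean

def auto_bin(sizes, tolerance=0.8):
    """Group sorted fragment sizes; a gap > tolerance starts a new bin."""
    bins, current = [], []
    for s in sorted(sizes):
        if current and s - current[-1] > tolerance:
            bins.append(current)
            current = []
        current.append(s)
    if current:
        bins.append(current)
    # label each bin by its rounded mean size (the called allele)
    return {round(mean(b), 1): b for b in bins}

# raw sizes from an electrophoresis run (illustrative values)
raw = [172.1, 172.4, 171.9, 176.0, 176.3, 180.2, 179.8, 180.1]
for allele, members in auto_bin(raw).items():
    print(f"allele {allele} bp <- {members}")
```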
Abstract:
3d and 4d core-level XPS spectra of CePd₃, a mixed-valence system, have been measured. Each spectrum exhibits two sets of structures, each corresponding to one of the valence states of cerium. The usefulness of XPS, which has so far not been used extensively to investigate mixed-valence cerium systems, is thus pointed out.
Abstract:
Models are abstractions of reality that have predetermined limits (often not consciously thought through) on what problem domains the models can be used to explore. These limits are determined by the range of observed data used to construct and validate the model. However, it is important to remember that operating the model beyond these limits, one of the reasons for building the model in the first place, potentially brings unwanted behaviour and thus reduces the usefulness of the model. Our experience with the Agricultural Production Systems Simulator (APSIM), a farming systems model, has led us to adapt techniques from the disciplines of modelling and software development to create a model development process. This process is simple, easy to follow, and brings a much higher level of stability to the development effort, which then delivers a much more useful model. A major part of the process relies on having a range of detailed model tests (unit, simulation, sensibility, validation) that exercise a model at various levels (sub-model, model and simulation). To underline the usefulness of testing, we examine several case studies where simulated output can be compared with simple relationships. For example, output is compared with crop water use efficiency relationships gleaned from the literature to check that the model reproduces the expected function. Similarly, another case study attempts to reproduce generalised hydrological relationships found in the literature. This paper then describes a simple model development process (using version control, automated testing and differencing tools), that will enhance the reliability and usefulness of a model.
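The water-use-efficiency case study lends itself to the kind of automated test the paper advocates. The sketch below frames it as a unit test, using the French-Schultz wheat relationship (roughly 20 kg/ha of grain per mm of water use above about 110 mm of soil evaporation) as an assumed stand-in for the paper's exact relationship; the simulator call is a hypothetical placeholder, not an actual APSIM run.

```python
# Sketch of an automated 'sensibility' test of the kind the paper
# advocates: check simulated yields against a simple water-use-efficiency
# benchmark. The French-Schultz form is an assumed stand-in for the
# paper's exact relationship; run_crop_simulation is a hypothetical
# placeholder for an APSIM simulation.
import unittest

def french_schultz_potential(water_use_mm, wue=20.0, soil_evap_mm=110.0):
    """Potential yield (kg/ha) from seasonal water use (mm)."""
    return max(0.0, wue * (water_use_mm - soil_evap_mm))

def run_crop_simulation(water_use_mm):
    """Hypothetical stand-in for a simulator run returning yield (kg/ha)."""
    return 0.8 * french_schultz_potential(water_use_mm)  # toy response

class WaterUseEfficiencyTest(unittest.TestCase):
    def test_yield_does_not_exceed_potential(self):
        # simulated yield should sit at or below the benchmark envelope
        for water_use in (150, 250, 350, 450):
            self.assertLessEqual(run_crop_simulation(water_use),
                                 french_schultz_potential(water_use))

    def test_yield_increases_with_water(self):
        # more seasonal water use should never reduce simulated yield
        yields = [run_crop_simulation(w) for w in (150, 250, 350)]
        self.assertEqual(yields, sorted(yields))

if __name__ == "__main__":
    unittest.main()
```

Run under version control and a differencing tool, a suite of such tests is what gives the development process described here its stability: any code change that pushes output outside the expected relationship fails immediately.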
Abstract:
The project renewed the Breedcow and Dynama software making it compatible with modern computer operating systems and platforms. Enhancements were also made to the linkages between the individual programs and their operation. The suite of programs is a critical component of the skill set required to make soundly based plans and production choices in the north Australian beef industry.