994 results for writing size zero
Abstract:
Research-creation thesis, comprising an essay section and a creative text.
Abstract:
In this paper we consider the estimation of population size from one-source capture–recapture data, that is, a list in which individuals can potentially be found repeatedly and where the question is how many individuals are missed by the list. As a typical example, we provide data from a drug user study in Bangkok from 2001 where the list consists of drug users who repeatedly contact treatment institutions. Drug users with 1, 2, 3, … contacts occur, but drug users with zero contacts are not present, requiring the size of this group to be estimated. Statistically, these data can be considered as stemming from a zero-truncated count distribution. We revisit an estimator for the population size suggested by Zelterman that is known to be robust under potential unobserved heterogeneity. We demonstrate that the Zelterman estimator can be viewed as a maximum likelihood estimator for a locally truncated Poisson likelihood which is equivalent to a binomial likelihood. This result allows the extension of the Zelterman estimator by means of logistic regression to include observed heterogeneity in the form of covariates. We also review an estimator proposed by Chao and explain why we are not able to obtain similar results for this estimator. The Zelterman estimator is applied in two case studies, the first a drug user study from Bangkok, the second an illegal immigrant study in the Netherlands. Our results suggest the new estimator should be used, in particular, if substantial unobserved heterogeneity is present.
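For concreteness, here is a minimal Python sketch of the two estimators discussed above, using only their standard closed forms (Zelterman's rate estimate lambda = 2*f2/f1 plugged into a Horvitz-Thompson-type correction, and Chao's lower bound n + f1^2/(2*f2)); the frequency table is invented for illustration and is not the Bangkok data.

```python
import numpy as np

def zelterman(freqs):
    """Zelterman estimator from a frequency-of-frequencies table
    freqs = {count: number of individuals observed that many times}."""
    n = sum(freqs.values())              # individuals observed at least once
    f1, f2 = freqs[1], freqs[2]
    lam = 2.0 * f2 / f1                  # local Poisson rate from f1 and f2 only
    return n / (1.0 - np.exp(-lam))      # correct for the unobserved zero class

def chao(freqs):
    """Chao's lower-bound estimator, shown for comparison."""
    n = sum(freqs.values())
    return n + freqs[1] ** 2 / (2.0 * freqs[2])

# Hypothetical frequency data: 3000 individuals seen once, 750 twice, ...
freqs = {1: 3000, 2: 750, 3: 180, 4: 40}
print(zelterman(freqs), chao(freqs))
```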
Abstract:
Estimation of population size with a missing zero-class is an important problem encountered in epidemiological assessment studies. Fitting a Poisson model to the observed data by the method of maximum likelihood and estimating the population size based on this fit is an approach that has been widely used for this purpose. In practice, however, the Poisson assumption is seldom satisfied. Zelterman (1988) has proposed a robust estimator for unclustered data that works well in a wide class of distributions applicable for count data. In the work presented here, we extend this estimator to clustered data. The estimator requires fitting a zero-truncated homogeneous Poisson model by maximum likelihood and then using a Horvitz-Thompson estimator of population size. This was found to work well when the data follow the hypothesized homogeneous Poisson model. However, when the true distribution deviates from the hypothesized model, the population size was found to be underestimated. In search of a more robust estimator, we focused on three models that use all clusters with exactly one case, those with exactly two cases and those with exactly three cases to estimate the probability of the zero-class, and thereby use data collected on all the clusters in the Horvitz-Thompson estimator of population size. The loss in efficiency associated with the gain in robustness was examined in a simulation study. As a trade-off between gain in robustness and loss in efficiency, the model that uses data collected on clusters with at most three cases to estimate the probability of the zero-class was found to be preferred in general. In applications, we recommend obtaining estimates from all three models and making a choice considering the estimates from the three models, robustness and the loss in efficiency.
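The baseline approach described above (zero-truncated Poisson maximum likelihood followed by a Horvitz-Thompson correction) can be sketched in a few lines; the function name and the toy counts below are illustrative and assume unclustered data.

```python
import numpy as np
from scipy.optimize import brentq

def ztp_population_size(counts):
    """Fit a zero-truncated Poisson by maximum likelihood, then return the
    Horvitz-Thompson estimate N_hat = n / (1 - exp(-lambda_hat))."""
    counts = np.asarray(counts, dtype=float)
    n, xbar = len(counts), counts.mean()
    # The ML equation equates the truncated-Poisson mean to the sample mean:
    #   lam / (1 - exp(-lam)) = xbar   (requires xbar > 1)
    lam = brentq(lambda l: l / (1.0 - np.exp(-l)) - xbar, 1e-9, 50.0)
    return n / (1.0 - np.exp(-lam))

# Toy per-individual counts; every observed count is necessarily >= 1
counts = [1] * 3000 + [2] * 750 + [3] * 180 + [4] * 40
print(ztp_population_size(counts))
```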
Abstract:
Established Monte Carlo user codes BEAMnrc and DOSXYZnrc permit the accurate and straightforward simulation of radiotherapy experiments and treatments delivered from multiple beam angles. However, when an electronic portal imaging detector (EPID) is included in these simulations, treatment delivery from non-zero beam angles becomes problematic. This study introduces CTCombine, a purpose-built code for rotating selected CT data volumes, converting CT numbers to mass densities, combining the results with model EPIDs and writing output in a form which can easily be read and used by the dose calculation code DOSXYZnrc. The geometric and dosimetric accuracy of CTCombine’s output has been assessed by simulating simple and complex treatments applied to a rotated planar phantom and a rotated humanoid phantom and comparing the resulting virtual EPID images with the images acquired using experimental measurements and independent simulations of equivalent phantoms. It is expected that CTCombine will be useful for Monte Carlo studies of EPID dosimetry as well as other EPID imaging applications.
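CTCombine itself is not reproduced here, but the two core operations the abstract attributes to it (rotating a CT volume and converting CT numbers to mass densities) can be sketched as follows; the gantry angle and the HU-to-density calibration points are generic placeholder values, not CTCombine's own.

```python
import numpy as np
from scipy.ndimage import rotate

# Toy CT volume in Hounsfield units (slices, rows, columns)
ct = np.random.randint(-1000, 1500, size=(64, 128, 128)).astype(float)

# 1. Rotate the volume in-plane to emulate a non-zero beam angle
gantry_angle = 45.0                                       # degrees, hypothetical
rotated = rotate(ct, gantry_angle, axes=(1, 2), reshape=False,
                 order=1, mode='constant', cval=-1000.0)  # pad with air

# 2. Map CT numbers to mass densities with a piecewise-linear ramp
hu_points = [-1000.0, 0.0, 1000.0, 3000.0]                # generic calibration
rho_points = [0.001, 1.0, 1.6, 2.9]                       # g/cm^3, approximate
density = np.interp(rotated, hu_points, rho_points)
```

A density grid of this kind is what a dose calculation code such as DOSXYZnrc ultimately consumes; the exact file format it expects is not shown here.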
Abstract:
We investigate whether the two zero-cost portfolios, SMB and HML, have the ability to predict economic growth for the markets investigated in this paper. Our findings show that the coefficients are positive in only a limited number of cases, and significance is achieved in an even more limited number of cases. Our results are in stark contrast to Liew and Vassalou (2000), who find coefficients to be generally positive and of a similar magnitude. We go a step further and also employ the methodology of Lakonishok, Shleifer and Vishny (1994) and once again fail to support the risk-based hypothesis of Liew and Vassalou (2000). In sum, we argue that the search for a robust economic explanation for the firm size and book-to-market equity effects needs sustained effort, as these two zero-cost portfolios do not represent economically relevant risk.
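For reference, the Liew and Vassalou (2000)-style predictive regression underlying these coefficient tests regresses future economic growth on lagged SMB and HML returns; the sketch below uses random placeholder series purely to show the specification, not real market or GDP data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
smb = rng.normal(0.0, 0.05, 120)          # placeholder SMB returns
hml = rng.normal(0.0, 0.05, 120)          # placeholder HML returns
gdp_growth = rng.normal(0.02, 0.01, 120)  # placeholder future GDP growth

X = sm.add_constant(np.column_stack([smb, hml]))
res = sm.OLS(gdp_growth, X).fit()
print(res.params, res.pvalues)   # the signs/significance the abstract refers to
```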
Abstract:
Purpose – This paper aims to present a novel rapid prototyping (RP) fabrication method and preliminary characterization for chitosan scaffolds. Design – A desktop rapid prototyping robot dispensing (RPBOD) system has been developed to fabricate scaffolds for tissue engineering (TE) applications. The system is a computer-controlled four-axis machine with a multiple-dispenser head. Neutralization of the acetic acid by the sodium hydroxide results in a precipitate that forms a gel-like chitosan strand. The scaffold properties were characterized by scanning electron microscopy, porosity calculation and compression testing. An example of fabrication of a freeform hydrogel scaffold is demonstrated. The required geometric data for the freeform scaffold were obtained from CT-scan images and the dispensing path control data were converted from its volume model. The applications of the scaffolds are discussed based on their potential for TE. Findings – It is shown that the RPBOD system can be interfaced with imaging techniques and computational modeling to produce scaffolds which can be customized in overall size and shape, allowing tissue-engineered grafts to be tailored to specific applications or even individual patients. Research limitations/implications – Important challenges for further research are the incorporation of growth factors, as well as cell seeding, into the 3D dispensing plotting materials. Improvements regarding the mechanical properties of the scaffolds are also necessary. Originality/value – One of the important aspects of TE is the design of scaffolds. For customized TE, it is essential to be able to fabricate 3D scaffolds of various geometric shapes in order to repair tissue defects. RP or solid free-form fabrication techniques hold great promise for designing 3D customized scaffolds; yet traditional cell-seeding techniques may not provide enough cell mass for larger constructs. This paper presents a novel attempt to fabricate 3D scaffolds using hydrogels, which in the future can be combined with cells.
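Of the characterization steps listed, the porosity calculation has a common gravimetric form worth spelling out; the chitosan density used below is a typical literature value, not a figure from this paper.

```python
def gravimetric_porosity(mass_g, volume_cm3, material_density_g_cm3):
    """Porosity = 1 - (apparent scaffold density / bulk material density)."""
    apparent_density = mass_g / volume_cm3
    return 1.0 - apparent_density / material_density_g_cm3

# Hypothetical scaffold: 0.12 g occupying 1.0 cm^3; chitosan ~1.34 g/cm^3
print(gravimetric_porosity(0.12, 1.0, 1.34))   # ~0.91, i.e. about 91% porous
```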
Abstract:
The present rate of technological advance continues to place significant demands on data storage devices. The sheer amount of digital data being generated each year, along with consumer expectations, fuels these demands. At present, most digital data is stored magnetically, in the form of hard disk drives or on magnetic tape. The increase in areal density (AD) of magnetic hard disk drives over the past 50 years has been of the order of 100 million times, and current devices are storing data at ADs of the order of hundreds of gigabits per square inch. However, it has been known for some time that the progress in this form of data storage is approaching fundamental limits. The main limitation relates to the lower size limit that an individual bit can have for stable storage. Various techniques for overcoming these fundamental limits are currently the focus of considerable research effort. Most attempt to improve current data storage methods, or modify these slightly for higher density storage. Alternatively, three-dimensional optical data storage is a promising field for the information storage needs of the future, offering very high density, high speed memory. There are two ways in which data may be recorded in a three-dimensional optical medium: either bit-by-bit (similar in principle to an optical disc medium such as CD or DVD) or by using pages of bit data. Bit-by-bit techniques for three-dimensional storage offer high density but are inherently slow due to the serial nature of data access. Page-based techniques, where a two-dimensional page of data bits is written in one write operation, can offer significantly higher data rates, due to their parallel nature. Holographic Data Storage (HDS) is one such page-oriented optical memory technique. This field of research has been active for several decades, but with few commercial products presently available. Another page-oriented optical memory technique involves recording pages of data as phase masks in a photorefractive medium. A photorefractive material is one in which the refractive index can be modified by light of the appropriate wavelength and intensity, and this property can be used to store information in these materials. In phase mask storage, two-dimensional pages of data are recorded into a photorefractive crystal, as refractive index changes in the medium. A low-intensity readout beam propagating through the medium will have its intensity profile modified by these refractive index changes, and a CCD camera can be used to monitor the readout beam and thus read the stored data. The main aim of this research was to investigate data storage using phase masks in the photorefractive crystal lithium niobate (LiNbO3). Firstly, the experimental methods for storing the two-dimensional pages of data (a set of vertical stripes of varying lengths) in the medium are presented. The laser beam used for writing, whose intensity profile is modified by an amplitude mask which contains a pattern of the information to be stored, illuminates the lithium niobate crystal, and the photorefractive effect causes the patterns to be stored as refractive index changes in the medium. These patterns are read out non-destructively using a low intensity probe beam and a CCD camera. A common complication of information storage in photorefractive crystals is the issue of destructive readout. This is a problem particularly for holographic data storage, where the readout beam should be at the same wavelength as the beam used for writing.
Since the charge carriers in the medium are still sensitive to the read light field, the readout beam erases the stored information. A method to avoid this is thermal fixing. Here the photorefractive medium is heated to temperatures above 150 °C; this process forms an ionic grating in the medium. This ionic grating is insensitive to the readout beam and therefore the information is not erased during readout. A non-contact method for determining temperature change in a lithium niobate crystal is presented in this thesis. The temperature-dependent birefringent properties of the medium cause intensity oscillations to be observed for a beam propagating through the medium during a change in temperature. It is shown that each oscillation corresponds to a particular temperature change, and by counting the number of oscillations observed, the temperature change of the medium can be deduced. The presented technique for measuring temperature change could easily be applied to a situation where thermal fixing of data in a photorefractive medium is required. Furthermore, by using an expanded beam and monitoring the intensity oscillations over a wide region, it is shown that the temperature in various locations of the crystal can be monitored simultaneously. This technique could be used to deduce temperature gradients in the medium. It is shown that the three-dimensional nature of the recording medium causes interesting degradation effects to occur when the patterns are written for a longer-than-optimal time. This degradation results in the splitting of the vertical stripes in the data pattern, and for long writing exposure times this process can result in the complete deterioration of the information in the medium. It is shown that simply by using incoherent illumination, the original pattern can be recovered from the degraded state. The reason for the recovery is that the refractive index changes causing the degradation are of a smaller magnitude, since they are induced by the write field components scattered from the written structures. During incoherent erasure, the lower magnitude refractive index changes are neutralised first, allowing the original pattern to be recovered. The degradation process is shown to be reversed during the recovery process, and a simple relationship is found relating the time at which particular features appear during degradation and recovery. A further outcome of this work is that a minimum stripe width of 30 µm is required for accurate storage and recovery of the information in the medium; any size smaller than this results in incomplete recovery. The degradation and recovery process could be applied to an application in image scrambling or cryptography for optical information storage. A two-dimensional numerical model based on the finite-difference beam propagation method (FD-BPM) is presented and used to gain insight into the pattern storage process. The model shows that the degradation of the patterns is due to the complicated path taken by the write beam as it propagates through the crystal, and in particular the scattering of this beam from the induced refractive index structures in the medium. The model indicates that the highest quality pattern storage would be achieved with a thin 0.5 mm medium; however, this type of medium would also remove the degradation property of the patterns and the subsequent recovery process.
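The fringe-counting thermometry described in this abstract follows from a standard retardation argument: the transmitted intensity completes one full oscillation whenever the accumulated birefringent retardation L·Δn changes by one wavelength, so the temperature change per fringe is λ / (L · |d(Δn)/dT|). The values below are order-of-magnitude numbers for LiNbO3 assumed for illustration, not parameters from the thesis.

```python
wavelength = 633e-9     # probe wavelength [m], assumed
L = 10e-3               # crystal thickness along the beam [m], assumed
dDn_dT = 4e-5           # |d(birefringence)/dT| [1/K], order of magnitude

dT_per_fringe = wavelength / (L * dDn_dT)   # ~1.6 K per oscillation here
fringes_counted = 20
print(fringes_counted * dT_per_fringe)      # inferred temperature change [K]
```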
To overcome the simplistic treatment of the refractive index change in the FD-BPM model, a fully three-dimensional photorefractive model developed by Devaux is presented. This model provides significant insight into the pattern storage, particularly for the degradation and recovery process, and confirms the theory that the recovery of the degraded patterns is possible because the refractive index changes responsible for the degradation are of a smaller magnitude. Finally, detailed analysis of the pattern formation and degradation dynamics for periodic patterns of various periodicities is presented. It is shown that stripe widths in the write beam of greater than 150 µm result in the formation of different types of refractive index changes, compared with stripes of smaller widths. As a result, it is shown that the pattern storage method discussed in this thesis has an upper feature size limit of 150 µm for accurate and reliable pattern storage.
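For readers unfamiliar with the FD-BPM referred to above, a minimal one-transverse-dimension Crank-Nicolson sketch of paraxial beam propagation through a weak index perturbation is given below. The grid, beam and index values are illustrative assumptions and are far simpler than the thesis model.

```python
import numpy as np
from scipy.linalg import solve_banded

# Illustrative parameters (not from the thesis)
wavelength = 633e-9                 # [m]
n0 = 2.2                            # approximate background index of LiNbO3
k0 = 2 * np.pi / wavelength
k = k0 * n0
nx, dx = 512, 0.5e-6                # transverse grid
nz, dz = 2000, 1e-6                 # propagation steps

x = (np.arange(nx) - nx / 2) * dx
E = np.exp(-(x / 20e-6) ** 2).astype(complex)   # Gaussian input beam
dn = -1e-4 * np.exp(-(x / 10e-6) ** 2)          # toy induced index change

# Crank-Nicolson for dE/dz = (i/2k) d2E/dx2 + i k0 dn E
alpha = 1j * dz / (4 * k * dx ** 2)
beta = 1j * k0 * dn * dz / 2

ab = np.zeros((3, nx), dtype=complex)   # banded LHS matrix (I - dz*L/2)
ab[0, 1:] = -alpha                      # superdiagonal
ab[1, :] = 1 + 2 * alpha - beta         # main diagonal
ab[2, :-1] = -alpha                     # subdiagonal

for _ in range(nz):
    rhs = (1 - 2 * alpha + beta) * E    # (I + dz*L/2) E
    rhs[1:] += alpha * E[:-1]
    rhs[:-1] += alpha * E[1:]
    E = solve_banded((1, 1), ab, rhs)

intensity = np.abs(E) ** 2              # output profile after propagation
```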
Abstract:
Flexible tubular structures fabricated from solution electrospun fibers are finding increasing use in tissue engineering applications. However, it is difficult to control the deposition of fibers due to the chaotic nature of the solution electrospinning jet. By using non-conductive polymer melts instead of polymer solutions, the path and collection of the fiber become predictable. In this work we demonstrate the melt electrospinning of polycaprolactone in a direct writing mode onto a rotating cylinder. This allows the design and fabrication of tubes using 20 μm diameter fibers with controllable micropatterns and mechanical properties. A key design parameter is the fiber winding angle, which allows control over scaffold pore morphology (e.g. size, shape, number and porosity). Furthermore, the establishment of a finite element model as a predictive design tool is validated against mechanical testing results of melt electrospun tubes to show that a smaller winding angle provides an improved mechanical response to uniaxial tension and compression. In addition, we show that melt electrospun tubes support the growth of three different cell types in vitro and are therefore promising scaffolds for tissue engineering applications.
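As a rough illustration of why the winding angle sets pore size and shape: two fiber families laid at ±θ to the tube axis tile the unrolled tube surface with rhombic pores whose dimensions follow directly from the angle and the fiber spacing. The helper below is a hypothetical geometric sketch, not code from the study.

```python
import numpy as np

def rhombic_pore(winding_angle_deg, fiber_spacing):
    """Side length and area of the rhombic pore formed by two fiber
    families at +/- theta; fiber_spacing is the perpendicular spacing
    between parallel fibers of one family (any length unit)."""
    theta = np.radians(winding_angle_deg)
    side = fiber_spacing / np.sin(2 * theta)
    area = fiber_spacing ** 2 / np.sin(2 * theta)
    return side, area

# Pore area is smallest at 45 degrees and grows as the angle departs from it
for angle in (30, 45, 60):
    print(angle, rhombic_pore(angle, 200.0))
```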
Abstract:
This article describes the first steps toward comprehensive characterization of molecular transport within scaffolds for tissue engineering. The scaffolds were fabricated using a novel melt electrospinning technique capable of constructing 3D lattices of layered polymer fibers with well-defined internal microarchitectures. The general morphology and structural order were then determined using T2-weighted magnetic resonance imaging and X-ray microcomputed tomography. Diffusion tensor microimaging was used to measure the time-dependent diffusivity and diffusion anisotropy within the scaffolds. The measured diffusion tensors were anisotropic and consistent with the cross-hatched geometry of the scaffolds: diffusion was least restricted in the direction perpendicular to the fiber layers. The results demonstrate that the cross-hatched scaffold structure preferentially promotes molecular transport vertically through the layers (z-axis), with more restricted diffusion in the directions of the fiber layers (x–y plane). Diffusivity in the x–y plane was observed to be invariant to the fiber thickness. The characteristic pore size of the fiber scaffolds can be probed by sampling the diffusion tensor at multiple diffusion times. Prospective application of diffusion tensor imaging for the real-time monitoring of tissue maturation and nutrient transport pathways within tissue engineering scaffolds is discussed.
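The anisotropy finding can be made concrete with the standard tensor-derived metrics, mean diffusivity (MD) and fractional anisotropy (FA); the example tensor below is illustrative, chosen only to mimic the reported z-axis preference, and is not measured data.

```python
import numpy as np

def dti_metrics(D):
    """MD and FA from a 3x3 symmetric diffusion tensor (standard formulas)."""
    evals = np.linalg.eigvalsh(D)
    md = evals.mean()
    fa = np.sqrt(1.5 * np.sum((evals - md) ** 2) / np.sum(evals ** 2))
    return md, fa

# Faster diffusion along z (through the layers) than in the x-y plane
D = np.diag([1.2e-9, 1.2e-9, 1.8e-9])   # m^2/s
print(dti_metrics(D))
```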
Abstract:
This chapter focuses on the physicality of the iPad as an object, and how that physicality affects the interactions children have with the device generally, and with the apps specifically. Thinking about the physicality of the iPad is important because the materials, size, weight and appearance make the iPad quite unlike most other toys and equipment in the kindergarten space. Most strikingly, this physicality does not ‘represent’ the vast virtual dimensions of the iPad brought about through the diverse functions and contents of the apps contained in it. While the iPad is small enough and functional enough to be easily handled and operated even by young children, it is capable of performing highly complex, highly technological tasks that take it beyond its diminutive dimensions. This virtual-actual contrast is interesting to consider in relation to the other resources more commonly found in a kindergarten space. While objects such as toys, bricks and building materials often do prompt the child to imagine and invent beyond the physical boundaries of the toy, they do not have the same types of virtual-actual contrasts as a digital device such as the iPad. How, then, might children be drawn to the iPad because of its physical, technological and virtual difference? Particularly, how might this virtual-actual difference impact on the physical skills associated with writing and drawing: skills usually learnt through the use of a pencil and paper? While the research project did not set out to compare how digital and paper-based resources affect writing and drawing skills, there was great interest in seeing how young children negotiated drawing and writing on the shiny glass surface of the iPad.
Abstract:
The usual postmodern suspicions about diligently deciphering authorial intent or stridently seeking fixed meaning/s and/or binary distinctions in an artistic work aside, this self-indulgent essay pushes the boundaries regarding normative academic research, for it focusses on my own (minimally celebrated) published creative writing’s status as a literary innovation. Dedicated to illuminating some of the less common denominators at play in Australian horror, my paper recalls the creative writing process involved when I set upon the (arrogant?) goal of creating a new genre of creative writing: that of the ‘Aboriginal Fantastic’. I compare my work to the literary output of a small but significant group (2.5% of the population), of which I am a member: Aboriginal Australians. I narrow my focus even further by examining that creative writing known as Aboriginal horror. And I reduce the sample size of my study to an exceptionally small number by restricting my view to one type of Aboriginal horror literature only: the Aboriginal vampire novel, a genre to which I have contributed professionally with the 2011 paperback and 2012 e-book publication of That Blackfella Bloodsucka Dance! However, as this paper hopefully demonstrates, and despite what may be interpreted by some cynical commentators as the faux sincerity of my taxonomic fervour, Aboriginal horror is a genre noteworthy for its instability and worthy of further academic interrogation. (first paragraph)