185 results for Laypeople orders


Abstract:

Traditional ceramic separation membranes, which are fabricated by applying colloidal suspensions of metal hydroxides to porous supports, tend to suffer from pinholes and cracks that seriously affect their quality. Other intrinsic problems of these membranes include dramatic losses of flux when the pore sizes are reduced to enhance selectivity, and dead-end pores that make no contribution to filtration. In this work, we propose a new strategy for addressing these problems by constructing a hierarchically structured separation layer on a porous substrate using large titanate nanofibers and smaller boehmite nanofibers. The nanofibers are able to divide large voids into smaller ones without forming dead-end pores and with minimal reduction of the total void volume. The separation layer of nanofibers has a porosity of over 70% by volume, whereas the separation layer in conventional ceramic membranes has a porosity below 36% and inevitably includes dead-end pores that make no contribution to the flux. This radical change in membrane texture greatly enhances membrane performance. The resulting membranes were able to filter out 95.3% of 60-nm particles from a 0.01 wt % latex while maintaining a relatively high flux of between 800 and 1000 L/m²·h under a low driving pressure (20 kPa). Such flow rates are orders of magnitude greater than those of conventional membranes of equal selectivity. Moreover, the flux was stable at approximately 800 L/m²·h with a selectivity of more than 95%, even after six repeated runs of filtration and calcination. Use of different supports, either porous glass or porous alumina, had no substantial effect on the performance of the membranes; thus, it is possible to construct the membranes on a variety of supports without compromising functionality. The Darcy equation satisfactorily describes the correlation between the filtration flux and the structural parameters of the new membranes. The assembly of nanofiber meshes to combine high flux with excellent selectivity is an exciting new direction in membrane fabrication.
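The closing claim about the Darcy equation can be made concrete with a small calculation. The sketch below is illustrative only: the 20 kPa driving pressure comes from the abstract, but the layer permeability and thickness are invented stand-ins, chosen so the resulting flux lands near the reported 800–1000 L/m²·h window.

```python
# Minimal sketch of a Darcy-law flux estimate for a thin separation layer:
# J = k * dP / (mu * L). Only the 20 kPa pressure comes from the abstract;
# the permeability and thickness below are assumed, illustrative values.

def darcy_flux(permeability_m2, delta_p_pa, viscosity_pa_s, thickness_m):
    """Volumetric flux through the layer, in m^3 per m^2 per second."""
    return permeability_m2 * delta_p_pa / (viscosity_pa_s * thickness_m)

J = darcy_flux(permeability_m2=6e-17,  # assumed effective permeability
               delta_p_pa=20e3,        # 20 kPa driving pressure (abstract)
               viscosity_pa_s=1e-3,    # water at ~20 degrees C
               thickness_m=5e-6)       # assumed separation-layer thickness

# Convert m^3 m^-2 s^-1 to the L/m^2.h units used in the abstract;
# these assumed inputs land near the reported 800-1000 L/m^2.h range.
print(f"flux ~ {J * 1000 * 3600:.0f} L/m^2.h")   # -> ~864
```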

Abstract:

The title of this book, Hard Lessons: Reflections on Crime Control in Late Modernity, contains a number of clues about its general theoretical direction. It is a book concerned, first and foremost, with the vagaries of crime control in western, neo-liberal, English-speaking countries. More specifically, Hard Lessons draws attention to a number of examples in which discrete populations – those who have in one way or another offended against the criminal law – have become the subjects of various forms of state intervention, regulation and control. We are concerned most of all with the ways in which recent criminal justice policies and practices have resulted in what are variously described as unintended consequences, unforeseen outcomes, unanticipated results, counter-productive effects or negative side effects. At their simplest, such terms refer to the apparent gulf between intention and outcome; they often form the basis for a considerable amount of policy reappraisal, soul searching and even nihilistic despair among the mandarins of crime control. Unintended consequences can, of course, be both positive and negative. Occasionally, crime control measures may result in beneficial outcomes, such as the use of DNA to acquit wrongly convicted prisoners. Generally, however, unforeseen effects tend to be negative, even entirely counterproductive, and/or directly opposite to what was originally intended. All this, of course, presupposes the sort of rational, well-meaning and transparent policy-making process so beloved by liberal social policy theorists. Yet, as Judith Bessant points out in her chapter, this view of policy formulation tends to obscure the often covert, regulatory and downright malevolent intentions contained in many government policies and practices. Indeed, history is replete with examples of governments seeking to mask their real aims from a prying public eye. Denials and various sorts of ‘techniques of neutralisation’ serve to cloak the real or ‘underlying’ aims of the powerful (Cohen 2000). The latest crop of ‘spin doctors’ and ‘official spokespersons’ has ensured that the process of governmental obfuscation, distortion and concealment remains deeply embedded in neo-liberal forms of governance. There is little new or surprising in this; nor should we be shocked when things ‘go wrong’ in the domain of crime control, since many unintended consequences are, more often than not, quite predictable. Prison riots, high rates of recidivism and breaches of supervision orders, expansion rather than contraction of control systems, laws that create the opposite of what was intended – all these are normative features of western crime control. Indeed, without the deep fault lines running between policy and outcome it would be hard to imagine what many policy makers, administrators and practitioners would do: their day-to-day work practices (and incomes) are directly dependent upon emergent ‘service delivery’ problems. Despite recurrent howls of official anguish and occasional despondency, it is apparent that those involved in propping up the apparatus of crime control have a vested interest in ensuring that policies and practices remain in an enduring state of review and reform.

Abstract:

Human-specific Bacteroides HF183 (HS-HF183), human-specific Enterococcus faecium esp (HS-esp), human-specific adenovirus (HS-AVs) and human-specific polyomavirus (HS-PVs) assays were evaluated in freshwater, seawater and distilled water for their ability to detect fresh sewage. The sewage-spiked water samples were also tested for the concentrations of traditional fecal indicators (i.e., Escherichia coli, enterococci and Clostridium perfringens) and enteric viruses such as enteroviruses (EVs), sapoviruses (SVs) and torquetenoviruses (TVs). The overall host-specificity of the HS-HF183 marker in differentiating between humans and other animals was 98%, while the HS-esp, HS-AVs and HS-PVs markers showed 100% host-specificity. All the human-specific markers showed >97% sensitivity for detecting human fecal pollution. E. coli, enterococci and C. perfringens were detected up to sewage dilutions of 10⁻⁵, 10⁻⁴ and 10⁻³, respectively. HS-esp, HS-AVs, HS-PVs, SVs and TVs were detected up to a sewage dilution of 10⁻⁴, whilst EVs were detected up to a dilution of 10⁻⁵. The ability of the HS-HF183 marker to detect fresh sewage was 3–4 orders of magnitude higher than that of the HS-esp and viral markers. The ability to detect fresh sewage in freshwater, seawater and distilled water matrices was similar for the human-specific bacterial and viral markers. Based on our data, it appears that human-specific molecular markers are sensitive measures of fresh sewage pollution, with the HS-HF183 marker the most sensitive among them. However, the presence of the HS-HF183 marker in environmental waters may not necessarily indicate the presence of enteric viruses, because the marker is far more abundant in sewage than the viruses are. More research is required on the persistence of these markers in environmental water samples in relation to traditional fecal indicators and enteric pathogens.

Abstract:

Nonlinearity, uncertainty and subjectivity are the three predominant characteristics of contractor prequalification, and they make the process more of an art than a scientific evaluation. A fuzzy neural network (FNN) model, amalgamating fuzzy set and neural network theories, has been developed to improve the objectivity of contractor prequalification. Through FNN theory, the fuzzy rules used by the prequalifiers can be identified and the corresponding membership functions can be transformed. Eighty-five cases with detailed decision criteria and rules for prequalifying Hong Kong civil engineering contractors were collected. These cases were used for training (calibrating) and testing the FNN model. The performance of the FNN model was compared with the original results produced by the prequalifiers and with those generated by a general feedforward neural network (GFNN, i.e. a crisp neural network) approach. Contractors’ ranking orders, the model efficiency (R²) and the mean absolute percentage error (MAPE) were examined during the testing phase. The results indicate the applicability of the neural network approach to contractor prequalification and the benefits of the FNN model over the GFNN model. The FNN is a practical approach for modelling contractor prequalification.
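For readers unfamiliar with the two test-phase metrics named above, the sketch below shows how model efficiency (R²) and the mean absolute percentage error (MAPE) are computed, together with a simple ranking-agreement check. The prequalification scores are invented for illustration and are not the paper's data.

```python
# Sketch of the test-phase metrics: R^2, MAPE, and ranking agreement.
# The toy expert/model scores below are hypothetical.
import numpy as np

def r_squared(y_true, y_pred):
    """Model efficiency: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

# Hypothetical prequalification scores: expert ratings vs. model output.
expert = np.array([72.0, 65.0, 88.0, 54.0, 79.0])
model  = np.array([70.5, 67.0, 85.0, 57.0, 77.5])

print(f"R^2  = {r_squared(expert, model):.3f}")
print(f"MAPE = {mape(expert, model):.1f}%")
# Contractors' ranking orders agree when the argsort orders coincide.
print("same ranking:", np.array_equal(np.argsort(expert), np.argsort(model)))
```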

Abstract:

This study investigates the everyday practices of young children acting in their social worlds within the context of the school playground. It employs an ethnographic, ethnomethodological approach using conversation analysis. In the context of the child participation rights advanced by the United Nations Convention on the Rights of the Child (UNCRC) and childhood studies, the study considers children’s social worlds and their participation agendas. The participants were a group of young children in a preparatory year setting in a Queensland school. These children, aged 4 to 6 years, were video-recorded as they participated in their day-to-day activities in the classroom and in the playground. Data collection took place over a period of three months, with a total of 26 hours of video data. Episodes of the video-recordings were shown to small groups of children and to the teacher to stimulate conversations about what they saw on the video. The conversations were audio-recorded. This method acknowledged the child’s standpoint and positioned children as active participants in accounting for their relationships with others. These accounts are discussed as interactionally built comments on past joint experiences and provide a starting place for analysis of the video-recorded interaction. Four data chapters are presented in this thesis. Each data chapter investigates a different topic of interaction: how children use “telling” as a tactical tool in the management of interactional trouble; how children use their “ideas” as possessables to gain ownership of a game, and the interactional matters that follow; how children account for interactional matters and bid for ownership of “whose idea” the game was; and finally, how a small group of girls orientated to a particular code of conduct when accounting for their actions in a pretend game of “school”. Four key themes emerged from the analysis. The first theme addresses the two arenas of action operating in the social world of children: the “pretend”, as a player in a pretend game, and the “real”, as a classroom member. These two arenas are intertwined. Through inferences to explicit and implicit “codes of conduct”, moral obligations are invoked as children attempt to socially exclude one another, build alliances and enforce their own social positions. The second theme is the notion of shared history. The history that the children reconstructed acts as a thread that weaves through their interactions, with implications for present and future relationships. The third theme concerns ownership. In a shared context such as the playground, ownership is a highly contested issue. Children draw on resources such as rules, their ideas as possessables, and codes of behaviour as devices to construct particular social and moral orders around ownership of the game. These themes have consequences for children’s participation in a social group. The fourth theme, methodological in nature, shows how the researcher was viewed as an outsider and novice and was used as a resource by the children; this theme is used to inform adult-child relationships. The study was situated within an interest in participation rights for children and perspectives of children as competent beings.
Asking children to account for their participation in playground activities positions them as analysers of their own social worlds and offers adults further information for understanding how children themselves construct their social interactions. While reporting on the experiences of one group of children, this study opens up theoretical questions about children’s social orders and their influence on everyday practices. The thesis uncovers how children both participate in, and shape, their everyday social worlds through talk and interaction. It investigates the consequences that the taken-for-granted activities of “playing the game” have for children’s social participation in the wider culture of the classroom. Consideration of these findings may assist adults to better understand and appreciate the social worlds of young children in the school playground.

Abstract:

Ceramic membranes are of particular interest in many industrial processes due to their ability to function under extreme conditions while maintaining their chemical and thermal stability. The major structural deficiencies of the conventional fabrication approach are pinholes and cracks, and the dramatic losses of flux when pore sizes are reduced to enhance selectivity. We overcome these structural deficiencies by constructing a hierarchically structured separation layer on a porous substrate using larger titanate nanofibres and smaller boehmite nanofibres. This yields a radical change in membrane texture. Differences in the porous supports have no substantial influence on the texture of the resulting membranes. The membranes, with a top layer of nanofibres coated on different porous supports by a spin-coating method, have similarly sized filtration pores, in the range of 10–100 nm. These membranes are able to effectively filter out species larger than 60 nm at flow rates orders of magnitude greater than conventional membranes. The retention can exceed 95%, while maintaining a high flux of about 900 L m⁻² h⁻¹. Calcination after spin-coating creates solid linkages between the fibres and between the fibres and the substrate, in addition to converting boehmite into γ-alumina nanofibres. This reveals a new direction in membrane fabrication.

Abstract:

This thesis is a study of the naturally occurring radioactive material (NORM) activity concentrations, gamma dose rates and radon (222Rn) exhalation associated with the waste streams of large-scale onshore petroleum operations. The activities covered included sludge recovery from separation tanks, sludge farming, NORM storage, scaling in oil tubulars, scaling in gas production and sedimentation in produced-water evaporation ponds. Field work was conducted in the arid desert terrain of an operational oil exploration and production region in the Sultanate of Oman. The main radionuclides found were 226Ra and 210Pb (238U series), 228Ra and 228Th (232Th series), and 227Ac (235U series), along with 40K. All activity concentrations were higher than the ambient soil level and varied over several orders of magnitude. Gamma dose rates at a height of 1 m above ground over the farm-treated sludge ranged from 0.06 to 0.43 µSv h⁻¹, with an average close to the ambient soil mean of 0.086 ± 0.014 µSv h⁻¹, whereas gamma dose rates over the untreated sludge ranged from 0.07 to 1.78 µSv h⁻¹, with a mean of 0.456 ± 0.303 µSv h⁻¹. Radon exhalation rates reported for the oil waste products were all higher than the geometric mean of the ambient soil 222Rn exhalation rate for the area surrounding the sludge, and varied over three orders of magnitude. This study produced some unique findings, including: (i) detection of radiotoxic 227Ac in the oil scales and sludge; (ii) a new empirical relation between petroleum sludge activity concentrations and gamma dose rates; and (iii) assessment of the exhalation of 222Rn from oil sludge. Additionally, the study investigated a method to determine the age of oil scale and sludge from the inherent behaviour of radionuclides, using the 228Ra:226Ra and 228Th:228Ra activity ratios.
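The age-determination idea mentioned at the end relies on the ingrowth of 228Th from 228Ra in freshly formed scale. The sketch below assumes the textbook Bateman ingrowth relation and scale that initially contains 228Ra but no 228Th; the thesis' actual procedure may differ, and the measured ratio used here is hypothetical. The half-lives are physical constants.

```python
# Sketch of dating oil scale from its 228Th:228Ra activity ratio, assuming
# freshly formed scale incorporates 228Ra but no 228Th (a common assumption;
# not necessarily the thesis' exact method).
import math

T_HALF_RA228 = 5.75    # years
T_HALF_TH228 = 1.912   # years
LAM_RA = math.log(2) / T_HALF_RA228
LAM_TH = math.log(2) / T_HALF_TH228

def th_ra_ratio(t_years):
    """228Th:228Ra activity ratio after t years of Bateman ingrowth."""
    k = LAM_TH / (LAM_TH - LAM_RA)
    return k * (1.0 - math.exp(-(LAM_TH - LAM_RA) * t_years))

def age_from_ratio(ratio, t_max=60.0, tol=1e-6):
    """Invert th_ra_ratio by bisection; valid for ratios below ~1.5,
    the transient-equilibrium limit of this decay pair."""
    lo, hi = 0.0, t_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if th_ra_ratio(mid) < ratio:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Example: a hypothetical measured ratio of 1.0 gives an age of ~4.6 years.
print(f"estimated age: {age_from_ratio(1.0):.1f} years")
```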

Abstract:

The study reported here constitutes a full review of the major geological events that have influenced the morphological development of the southeast Queensland region. Most importantly, it provides evidence that the region’s physiography continues to be geologically ‘active’: although earthquakes are presently few and of low magnitude, many past events and tectonic regimes continue to exert strong influence over drainage, morphology and topography. Southeast Queensland is typified by highland terrain of metasedimentary and igneous rocks lying parallel and close to younger, lowland coastal terrain. The region is currently situated in a passive-margin tectonic setting that is now under compressive stress, although in the past it was subject to alternating extensional and compressive regimes. As part of the investigation, the effects of many past geological events upon landscape morphology have been assessed at multiple scales using features such as the location and orientation of drainage channels, topography, faults, fractures, scarps, cleavage, volcanic centres and deposits, and recent earthquake activity. A number of hypotheses for the local geological evolution are proposed and discussed. The study has also utilised a geographic information system (GIS) approach that successfully amalgamates the various types and scales of datasets used. A new method of stream ordination has been developed and is used to compare the orientation of channels of similar orders with rock fabric, in a topologically controlled approach that other ordering systems are unable to achieve. Stream pattern analysis has been performed, and the results provide evidence that many drainage systems in southeast Queensland are controlled by known geological structures and by past geological events. The results show that drainage at a fine scale is controlled by cleavage, joints and faults, while at a broader scale large river valleys, such as those of the Brisbane River and North Pine River, closely follow the locations of faults. These rivers appear to have become entrenched by differential weathering along these planes of weakness. Significantly, stream pattern analysis has also identified some ‘anomalous’ drainage, suggesting that the orientations of these watercourses are geologically controlled, but by unknown causes. To the north of Brisbane, a ‘coastal drainage divide’ has been recognised and is described here. The divide crosses several lithological units of different ages, runs parallel to the coast, and prevents drainage from the highlands flowing directly to the coast along its entire length. The diversion of low-order streams away from the divide may be evidence that a more recent process is the driving force. Although there is no conclusive evidence at present, it is postulated that the divide may have been generated by uplift or doming associated with mid-Cenozoic volcanism or a blind thrust at depth. Also north of Brisbane, on the D’Aguilar Range, an elevated valley (the ‘Kilcoy Gap’) has been identified that may once have drained towards the coast; it now displays reversed drainage that may have resulted from uplift along the coastal drainage divide and of the D’Aguilar blocks. An assessment of the distribution and intensity of recent earthquakes in the region indicates that activity may be associated with ancient faults. However, recent movement on these faults during these events would have been unlikely, given that earthquakes in the region are characteristically of low magnitude.
There is, however, evidence that compressive stress is building and being released periodically, and ancient faults are a likely place for this stress to be released. The relationship between ancient fault systems and the Tweed Shield Volcano is also discussed, and it is suggested that the volcanic activity was associated with renewed faulting on the Great Moreton Fault System during the Cenozoic. The geomorphology and drainage patterns of southeast Queensland have been compared with the morphological characteristics expected at passive and other tectonic settings, both in Australia and globally; of note are the comparisons with the East Brazilian Highlands, the Gulf of Mexico and the Blue Ridge Escarpment. In conclusion, the results of the study clearly show that, although the region is described as a passive margin, its complex past geological history and present compressive stress regime produce a more intricate and varied landscape than would be expected along typical passive continental margins. The literature review provides background to the subject and discusses previous work and methods, whilst the findings are presented in three peer-reviewed, published papers. The methods, hypotheses, suggestions and evidence are discussed at length in the final chapter.
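The abstract does not describe the new stream ordination method itself, so as background the sketch below implements the classic Strahler ordering that such topologically controlled schemes build on; the drainage network is a made-up toy example.

```python
# Sketch of classic Strahler stream ordering on a drainage network, for
# context on what "stream ordination" computes. The thesis' new method is
# not described in the abstract; this shows only the standard scheme.

def strahler(children, node):
    """Return the Strahler order of `node` in a tree given as a dict
    mapping each node to the list of its upstream tributaries."""
    kids = children.get(node, [])
    if not kids:                      # headwater channel
        return 1
    orders = [strahler(children, k) for k in kids]
    top = max(orders)
    # Order increases only when two (or more) tributaries of the
    # highest order meet; otherwise the maximum is carried through.
    return top + 1 if orders.count(top) >= 2 else top

# Hypothetical network: the outlet is fed by two branches, one of which
# is itself the junction of two headwaters.
network = {
    "outlet": ["branch_a", "branch_b"],
    "branch_a": ["head_1", "head_2"],
    "branch_b": [],
    "head_1": [],
    "head_2": [],
}
print(strahler(network, "outlet"))   # -> 2
```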

Abstract:

The results of a numerical investigation into the errors for least squares estimates of function gradients are presented. The underlying algorithm is obtained by constructing a least squares problem using a truncated Taylor expansion. An error bound associated with this method contains in its numerator terms related to the Taylor series remainder, while its denominator contains the smallest singular value of the least squares matrix. Perhaps for this reason the error bounds are often found to be pessimistic by several orders of magnitude. The circumstance under which these poor estimates arise is elucidated and an empirical correction of the theoretical error bounds is conjectured and investigated numerically. This is followed by an indication of how the conjecture is supported by a rigorous argument.
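A minimal sketch of the estimator described above may help: the first-order truncated Taylor expansion is assembled into a least squares problem for the gradient, and the smallest singular value of the matrix, which appears in the denominator of the error bound, is reported alongside. The test function and sample layout are illustrative choices, not the paper's experiments.

```python
# Sketch of a least squares gradient estimate: the truncated Taylor
# expansion f(x_i) ~ f(x0) + (x_i - x0).g is solved for g in the least
# squares sense. Test function and sample points are illustrative.
import numpy as np

def ls_gradient(x0, points, values, f0):
    """Estimate grad f at x0 from nearby samples via least squares."""
    A = points - x0                 # rows: displacements x_i - x0
    b = values - f0                 # first-order Taylor differences
    g, *_ = np.linalg.lstsq(A, b, rcond=None)
    # The smallest singular value of A enters the denominator of the
    # theoretical error bound: tiny values signal ill conditioning.
    sigma_min = np.linalg.svd(A, compute_uv=False)[-1]
    return g, sigma_min

f = lambda x: np.sin(x[0]) + x[1] ** 2           # test function
x0 = np.array([0.5, 1.0])
rng = np.random.default_rng(0)
pts = x0 + 1e-3 * rng.standard_normal((8, 2))    # 8 nearby samples
vals = np.array([f(p) for p in pts])

g, smin = ls_gradient(x0, pts, vals, f(x0))
print("estimate: ", g)                           # ~ [cos(0.5), 2.0]
print("exact:    ", [np.cos(0.5), 2.0])
print("sigma_min:", smin)
```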

Abstract:

Financial processes may possess long memory and their probability densities may display heavy tails. Many models have been developed to deal with this tail behaviour, which reflects the jumps in the sample paths. On the other hand, the presence of long memory, which contradicts the efficient market hypothesis, is still an issue for further debate. These difficulties present challenges for the problems of memory detection and of modelling the co-presence of long memory and heavy tails. This PhD project aims to respond to these challenges. The first part aims to detect memory in a large number of financial time series on stock prices and exchange rates using their scaling properties. Since financial time series often exhibit stochastic trends, a common form of nonstationarity, strong trends in the data can lead to false detection of memory. We take advantage of a technique known as multifractal detrended fluctuation analysis (MF-DFA), which can systematically eliminate trends of different orders. This method is based on the identification of the scaling of the q-th-order moments and is a generalisation of standard detrended fluctuation analysis (DFA), which uses only the second moment, that is, q = 2. We also consider rescaled range (R/S) analysis and the periodogram method to detect memory in financial time series and compare their results with those of MF-DFA. An interesting finding is that short memory is detected for stock prices on the American Stock Exchange (AMEX), while long memory is found in the time series of two exchange rates, namely the French franc and the Deutsche mark. Electricity price series for the five states of Australia are also found to possess long memory; for these series, heavy tails are also pronounced in their probability densities. The second part of the thesis develops models to represent the short-memory and long-memory financial processes detected in Part I. These models take the form of continuous-time AR(∞)-type equations whose kernel is the Laplace transform of a finite Borel measure. By imposing appropriate conditions on this measure, short memory or long memory in the dynamics of the solution will result. A specific form of the models, which has a good MA(∞)-type representation, is presented for the short-memory case. Parameter estimation for this type of model is performed via least squares, and the models are applied to the stock prices in the AMEX, which were established in Part I to possess short memory. By selecting the kernel in the continuous-time AR(∞)-type equations to have the form of the Riemann-Liouville fractional derivative, we obtain a fractional stochastic differential equation driven by Brownian motion. This type of equation is used to represent financial processes with long memory, whose dynamics are described by the fractional derivative in the equation. These models are estimated via quasi-likelihood, namely via a continuous-time version of the Gauss-Whittle method. The models are applied to the exchange rates and the electricity prices of Part I with the aim of confirming their possible long-range dependence established by MF-DFA. The third part of the thesis provides an application of the results established in Parts I and II to characterise and classify financial markets. We pay attention to the New York Stock Exchange (NYSE), the American Stock Exchange (AMEX), the NASDAQ Stock Exchange (NASDAQ) and the Toronto Stock Exchange (TSX).
The parameters from MF-DFA and those of the short-memory AR(∞)-type models are employed in this classification. We propose the Fisher discriminant algorithm to find a classifier in the two- and three-dimensional spaces of data sets and then use cross-validation to verify discriminant accuracies. This classification is useful for understanding and predicting the behaviour of different processes within the same market. The fourth part of the thesis investigates the heavy-tailed behaviour of financial processes which may also possess long memory. We consider fractional stochastic differential equations driven by stable noise to model financial processes such as electricity prices. The long memory of electricity prices is represented by a fractional derivative, while the stable noise input models their non-Gaussianity via the tails of their probability density. A method using the empirical densities and MF-DFA is provided to estimate all the parameters of the model and simulate sample paths of the equation. The method is then applied to analyse daily spot prices for five states of Australia. Comparisons with the results obtained from R/S analysis, the periodogram method and MF-DFA are provided. The results from the fractional SDEs agree with those from MF-DFA, which are based on multifractal scaling, while those from the periodograms, which are based on the second-order moments, seem to underestimate the long-memory dynamics of the process. This highlights the need for, and usefulness of, fractal methods in modelling non-Gaussian financial processes with long memory.
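As background for Part I, the sketch below implements standard DFA, the q = 2 special case that MF-DFA generalises by detrending with higher-order polynomials and varying q. Applied to synthetic white noise, the scaling exponent should come out near 0.5. It is a didactic illustration, not the thesis code.

```python
# Sketch of standard DFA (the q = 2 case that MF-DFA generalises),
# applied to synthetic white noise, for which alpha should be ~0.5.
import numpy as np

def dfa(x, scales):
    y = np.cumsum(x - np.mean(x))          # the "profile"
    fluct = []
    for s in scales:
        n_seg = len(y) // s
        mse = []
        for i in range(n_seg):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            coef = np.polyfit(t, seg, 1)   # detrend each window (order 1)
            mse.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        fluct.append(np.sqrt(np.mean(mse)))
    # Scaling exponent alpha = slope of log F(s) against log s.
    return np.polyfit(np.log(scales), np.log(fluct), 1)[0]

rng = np.random.default_rng(1)
x = rng.standard_normal(10_000)
scales = [16, 32, 64, 128, 256, 512]
print(f"alpha ~ {dfa(x, scales):.2f}   (white noise: ~0.5)")
```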

Abstract:

National culture is deeply rooted in values, which are learned and acquired when we are young (2007, p. 6), and “embedded deeply in everyday life” (Newman & Nollen, 1996, p. 754). Values have helped to shape us into who we are today. In other words, as we grow older, the cultural values we have learned and adapted to mould our daily practices. This is reflected in our actions, behaviours, and the ways in which we communicate. Based on this assertion, it can be suggested that national culture may also influence organisational culture, as our “behaviour at work is a continuation of behaviour learned earlier” (Hofstede, 1991, p. 4). Cultural influence in an organisation can be evidenced by looking at communication practices: how employees interact with one another as they communicate in their daily practices. Earlier studies in organisational communication see communication as the heart of the organisation it serves, and as “the essence of organised activity and the basic process out of which all other functions derive” (Bavelas and Barret, cited in Redding, 1985, p. 7). Hence, understanding how culture influences communication will help with understanding organisational behaviour. This study looked at how cultural values, referred to as culture dimensions in this thesis, influenced communication practices in an organisation that was going through a change process. A single case study was conducted in a Malaysian organisation to investigate how the Malaysian culture dimensions of respect, collectivism and harmony were evidenced in communication practices. Data were collected from twelve semi-structured interviews and five observation sessions. Guided by six attributes identified in the literature – (1) acknowledging seniority, knowledge and experience, (2) saving face, (3) showing loyalty to the organisation and its leaders, (4) demonstrating cohesiveness among members, (5) prioritising group interests over personal interests, and (6) avoiding confrontation – this study found eighteen communication practices performed by employees of the organisation. This research contributes to previous cultural work, especially in the Malaysian context, in which evidence of the Malaysian culture dimensions of respect, collectivism and harmony was displayed in communication practices: 1) acknowledging the status quo, 2) obeying orders and directions, 3) name dropping, 4) keeping silent, 5) avoiding questioning, 6) having separate conversations, 7) adding, not criticising, 8) sugar coating, 9) instilling a sense of belonging, 10) taking sides, 11) cooperating, 12) sacrificing personal interest, 13) protecting identity, 14) negotiating, 15) saying “yes” instead of “no”, 16) giving politically correct answers, 17) apologising, and 18) tolerating errors. Insights from these findings help us to understand organisational challenges that rely on communication, such as organisational change. The findings will therefore be relevant to practitioners seeking to understand the impact of culture on communication practices across countries.

Abstract:

In this paper, we consider a time-space fractional diffusion equation of distributed order (TSFDEDO). The TSFDEDO is obtained from the standard advection-dispersion equation by replacing the first-order time derivative by the Caputo fractional derivative of order α ∈ (0,1], and the first-order and second-order space derivatives by the Riesz fractional derivatives of orders β₁ ∈ (0,1) and β₂ ∈ (1,2], respectively. We derive the fundamental solution for the TSFDEDO with an initial condition (TSFDEDO-IC). The fundamental solution can be interpreted as a spatial probability density function evolving in time. We also investigate a discrete random walk model based on an explicit finite difference approximation for the TSFDEDO-IC.
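Explicit finite difference schemes for fractional derivatives of this kind are commonly built from Grünwald-Letnikov coefficients. The recurrence below is that standard ingredient, shown for orientation; it is not claimed to be the exact discretisation used in the paper.

```python
# Sketch of the Grunwald-Letnikov coefficients that typically underlie
# explicit finite difference schemes for fractional derivatives; a
# standard ingredient, not necessarily the paper's exact scheme.

def gl_weights(alpha, n):
    """First n+1 coefficients g_k = (-1)^k * C(alpha, k), via the
    recurrence g_0 = 1, g_k = g_{k-1} * (1 - (alpha + 1) / k)."""
    g = [1.0]
    for k in range(1, n + 1):
        g.append(g[-1] * (1.0 - (alpha + 1.0) / k))
    return g

# Example for a time-fractional order alpha in (0, 1]; 0.8 is arbitrary.
# g_0 = 1, all later weights are negative, and the full sum tends to
# (1 - 1)^alpha = 0 as n grows.
w = gl_weights(0.8, 1000)
print(w[:4])        # [1.0, -0.8, -0.08, -0.032]
print(sum(w))       # close to 0
```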

Abstract:

Purpose: All currently considered parametric models used for decomposing videokeratoscopy height data are viewer-centered and hence describe what the operator sees rather than what the surface is. The purpose of this study was to ascertain the applicability of an object-centered representation to the modeling of corneal surfaces. Methods: A three-dimensional surface decomposition into a series of spherical harmonics was considered and compared with the traditional Zernike polynomial expansion for a range of videokeratoscopic height data. Results: Spherical harmonic decomposition led to significantly better fits to corneal surfaces (in terms of root mean square error) than the corresponding Zernike polynomial expansions with the same number of coefficients, for all considered corneal surfaces, corneal diameters, and model orders. Conclusions: Spherical harmonic decomposition is a viable alternative to Zernike polynomial decomposition. It achieves better fits to videokeratoscopic height data and has the advantage of an object-centered representation that could be particularly suited to the analysis of multiple corneal measurements.
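The comparison above rests on a generic recipe: fit the height data to each basis by least squares and compare RMS errors for equal numbers of coefficients. The sketch below applies that recipe to a toy one-dimensional profile, with simple polynomial and Fourier bases standing in for the Zernike and spherical harmonic expansions, which are lengthier to construct.

```python
# Sketch of the comparison method: least squares fits of the same data to
# two bases with equal coefficient counts, compared by RMS error. The
# toy profile and bases are stand-ins, not corneal data.
import numpy as np

def rms_fit_error(design, z):
    """Fit z to the columns of `design` by least squares; return RMS error."""
    coef, *_ = np.linalg.lstsq(design, z, rcond=None)
    return np.sqrt(np.mean((z - design @ coef) ** 2))

x = np.linspace(-1.0, 1.0, 200)
z = np.exp(-2.0 * x**2) + 0.01 * np.sin(9.0 * x)   # toy "height" profile

# Two different bases with the same number of coefficients (6 each).
poly = np.vander(x, 6, increasing=True)            # 1, x, ..., x^5
cols = [np.ones_like(x)]
for k in (1, 2):
    cols += [np.cos(np.pi * k * x), np.sin(np.pi * k * x)]
cols.append(np.cos(3 * np.pi * x))
fourier = np.column_stack(cols)

print(f"polynomial basis RMS: {rms_fit_error(poly, z):.5f}")
print(f"Fourier basis RMS:    {rms_fit_error(fourier, z):.5f}")
```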

Abstract:

Purpose: To ascertain the effectiveness of object-centered three-dimensional representations for the modeling of corneal surfaces. Methods: Three-dimensional (3D) surface decompositions into series of basis functions including (i) spherical harmonics, (ii) hemispherical harmonics, and (iii) 3D Zernike polynomials were considered and compared to the traditional viewer-centered representation by a two-dimensional (2D) Zernike polynomial expansion, for a range of retrospective videokeratoscopic height data from three clinical groups. The data were collected using the Medmont E300 videokeratoscope. The groups included 10 normal corneas with corneal astigmatism less than −0.75 D, 10 astigmatic corneas with corneal astigmatism between −1.07 D and −3.34 D (mean = −1.83 D, SD = ±0.75 D), and 10 keratoconic corneas. Only data from the right eyes of the subjects were considered. Results: All object-centered decompositions led to significantly better fits to corneal surfaces (in terms of RMS error) than the corresponding 2D Zernike polynomial expansions with the same number of coefficients, for all considered corneal surfaces, corneal diameters (2, 4, 6, and 8 mm), and model orders (4th to 10th radial orders). The best results (smallest RMS fit error) were obtained with spherical harmonic decomposition, which led to about a 22% reduction in the RMS fit error compared to the traditional 2D Zernike polynomials. Hemispherical harmonics and the 3D Zernike polynomials reduced the RMS fit error by about 15% and 12%, respectively. Larger reductions in RMS fit error were achieved for smaller corneal diameters and lower-order fits. Conclusions: Object-centered 3D decompositions provide viable alternatives to the traditional viewer-centered 2D Zernike polynomial expansion of a corneal surface. They achieve better fits to videokeratoscopic height data and could be particularly suited to the analysis of multiple corneal measurements, where there can be slight variations in the position of the cornea from one map acquisition to the next.

Abstract:

A configurable process model describes a family of similar process models in a given domain. Such a model can be configured to obtain a specific process model that is subsequently used to handle individual cases, for instance, to process customer orders. Process configuration is notoriously difficult, as there may be all kinds of interdependencies between configuration decisions. In fact, an incorrect configuration may lead to behavioral issues such as deadlocks and livelocks. To address this problem, we present a novel verification approach inspired by the “operating guidelines” used for partner synthesis. We view the configuration process as an external service, and compute a characterization of all such services which meet particular requirements using the notion of a configuration guideline. As a result, we can characterize all feasible configurations (i.e., configurations without behavioral problems) at design time, instead of repeatedly checking each individual configuration while configuring a process model.
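To make the behavioural issue concrete, the sketch below runs the kind of per-configuration deadlock check that the configuration-guideline approach is designed to replace: an exhaustive reachability search over a tiny, invented Petri-net-like process model in which a configured-out step leaves an order stuck. The net, place names, and transitions are all hypothetical.

```python
# Sketch of a per-configuration deadlock check: exhaustive reachability
# search over a small, invented Petri-net-like process model. The paper's
# contribution is to replace such repeated checks with a design-time
# characterisation ("configuration guideline").
from collections import deque

# Each transition: (places consumed, places produced).
transitions = {
    "register": ({"start"}, {"registered"}),
    "approve":  ({"registered"}, {"approved"}),
    "ship":     ({"approved", "stock"}, {"done"}),
}

def enabled(marking, t):
    return transitions[t][0] <= marking

def fire(marking, t):
    pre, post = transitions[t]
    return frozenset((marking - pre) | post)

def find_deadlocks(initial, final):
    seen, queue, deadlocks = {initial}, deque([initial]), []
    while queue:
        m = queue.popleft()
        succ = [fire(m, t) for t in transitions if enabled(m, t)]
        if not succ and m != final:
            deadlocks.append(m)        # stuck, but not properly finished
        for s in succ:
            if s not in seen:
                seen.add(s)
                queue.append(s)
    return deadlocks

# Configuring out the "stock" token (e.g. disabling a procurement step)
# produces a deadlock: "approved" is reached but "ship" can never fire.
print(find_deadlocks(frozenset({"start"}), frozenset({"done"})))
```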