886 results for Scale development
Abstract:
In order to prepare younger generations to live in a world characterized by interconnectedness, developing global and international perspectives for future teachers has been recommended by the National Council for the Social Studies and the National Council for the Accreditation of Teacher Education. The purpose of this study was to investigate the effects that participation in the International Communication and Negotiation Simulation (ICONS), an Internet-based communication project, has on preservice social studies teachers' global knowledge, global mindedness, and global teaching strategies.

The study was conducted at a public university in South Florida. A combination of quantitative and qualitative approaches was employed. Two groups of preservice social studies teachers were chosen as participants: a control group composed of 14 preservice teachers who enrolled in a global education class in the summer semester of 1998 and an experimental group of nine preservice teachers who took the same class in the fall semester of 1998. The summer class was conducted in a traditional format, which included lectures, classroom discussions, and student presentations. The fall class incorporated a five-week Internet-based communication project. The Global Mindedness Scale (Hett, 1993) and an adapted Test of Global Knowledge (ETS, 1981) were administered upon completion of the class.

Contrasting case studies were used to investigate the impact of participation in the ICONS on the development of preservice teachers' global pedagogy. Four preservice teachers, two selected from each group, were observed and interviewed to explore how they infused global perspectives into social studies curriculum and instruction during a 10-week student teaching internship in the spring semester of 1999.

This study had three major findings. First, preservice social studies teachers from the experimental group on average scored significantly higher than those from the control group on the global knowledge test. Second, no significant difference was found between the two groups in their mean scores on the Global Mindedness Scale. Third, all four selected preservice social studies teachers infused global perspectives into United States and world history curriculum and instruction during their student teaching internship. Using multiple resources was the common pedagogy. The two who participated in the ICONS were more aware of using the communication features of the Internet and web sites reflecting more international perspectives to facilitate teaching about the world.

Recommendations were made for further research and for preservice social studies teacher education program development.
Abstract:
There is a growing body of literature that provides evidence for the efficacy of positive youth development programs in general and preliminary empirical support for the efficacy of the Changing Lives Program (CLP) in particular. This dissertation sought to extend previous efforts to develop and preliminarily examine the Transformative Goal Attainment Scale (TGAS) as a measure of participant empowerment in the promotion of positive development. Consistent with recent advances in the use of qualitative research methods, this dissertation sought to further investigate the utility of Relational Data Analysis (RDA) for categorizing qualitative open-ended response data. In particular, a qualitative index of Transformative Goals (TG) was developed to complement the previously developed quantitative index of Transformative Goal Attainment (TGA), and RDA procedures for calculating reliability and content validity were refined. Second, as a Stage I pilot/feasibility study, this study preliminarily examined the potentially mediating role of empowerment, as indexed by the TGAS, in the promotion of positive development.

Fifty-seven participants took part in this study: forty CLP intervention participants and seventeen control condition participants. All 57 participants were administered the study's measures just prior to and just following the fall 2003 semester. This study thus used a short-term longitudinal quasi-experimental research design with a comparison control group.

RDA procedures were refined and applied to the categorization of open-ended response data regarding participants' transformative goals (TG) and future possible selves (PSQ-QE). These analyses revealed relatively strong, indirect evidence for the construct validity of the categories as well as their theoretically meaningful structural organization, thereby providing sufficient support for the utility of RDA procedures in the categorization of qualitative open-ended response data.

In addition, transformative goals (TG), future possible selves (PSQ-QE), and the quantitative index of perceived goal attainment (TGA) were evaluated as potential mediators of positive development by testing their relationships to other indices of positive intervention outcome within a four-step method involving both analysis of variance (ANOVAs and RM-ANOVAs) and regression analysis. Though more limited in scope than the efforts at developing and refining the measures of these mediators, these results were also promising.
Abstract:
Colombia's increasingly effective efforts to mitigate the power of the FARC and other illegitimately armed groups in the country can offer important lessons for the Peruvian government as it strives to prevent a resurgence of Sendero Luminoso and other illegal non-state actors. Both countries share particular challenges: deep economic, social, and, in the case of Peru, ethnic divisions; the presence and/or effects of violent insurgencies; large-scale narcotics production and trafficking; and a history of weak state presence in large tracts of isolated and sparsely populated areas. Important differences exist, however, in the nature of the insurgencies in the two countries, the government responses to them, and the nature of government and society, all of which affect the applicability of Colombia's experience to Peru. The security threat to Panama from drug trafficking and Colombian insurgents -- often linked phenomena -- is in many ways different from the drug/insurgent factor in Colombia itself and in Peru, although there are similar variables. Unlike the Colombian and Peruvian cases, the security threat in Panama is not directed against the state; there are no domestic elements seeking to overthrow the government, as in the cases of the FARC and Sendero Luminoso; security problems have not spilled over from rural to urban areas in Panama; and there is no ideological component driving the threat. Nor is drug cultivation a major factor in Panama as it is in Colombia and Peru. The key variable shared among all three cases is the threat of extra-state actors controlling remote rural areas or small towns where state presence is minimal. The central lesson learned from Colombia is the need to define and then address the key problem of a "sovereignty gap," the lack of legitimate state presence in many parts of the country. Colombia's success in broadening the presence of the national government after 2002 is owed to many factors, including an effective national strategy, improvements in the armed forces and police, political will on the part of the government for a sustained effort, citizen buy-in to the national strategy, including the resolve of the elite to pay more in taxes to bring change about, and the adoption of a sequenced approach to consolidated development in conflicted areas. Control of territory and effective state presence improved citizen security, strengthened confidence in democracy and the legitimate state, promoted economic development, and helped mitigate the effect of illegal drugs. Peru can benefit from the Colombian experience especially in terms of the importance of legitimate state authority, improved institutions, gaining the support of local citizens, and furthering development to wean communities away from drugs. State-coordinated "integration" efforts in Peru, as practiced in Colombia, have the potential for success if properly calibrated to Peruvian reality, coordinated within government, and provided with sufficient resources. Peru's traditionally weak political institutions and lack of public confidence in the state in many areas of the country must be overcome if this effort is to be successful.
Abstract:
A major portion of hurricane-induced economic loss originates from damage to building structures. Damage to building structures is typically grouped into three main categories: exterior, interior, and contents damage. Although the latter two types of damage, in most cases, cause more than 50% of the total loss, little has been done to investigate the physical damage process and unveil the interdependence of interior damage parameters. Building interior and contents damage is mainly due to wind-driven rain (WDR) intrusion through building envelope defects, breaches, and other functional openings. The limited research and consequent knowledge gaps are due in large part to the complexity of damage phenomena during hurricanes and the lack of established measurement methodologies to quantify rainwater intrusion. This dissertation focuses on devising methodologies for large-scale experimental simulation of tropical cyclone WDR and measurement of rainwater intrusion to acquire benchmark test-based data for the development of a hurricane-induced building interior and contents damage model. Target WDR parameters derived from tropical cyclone rainfall data were used to simulate the WDR characteristics at the Wall of Wind (WOW) facility. The proposed WDR simulation methodology presents detailed procedures for selecting the type and number of nozzles, formulated based on the tropical cyclone WDR study. The simulated WDR was later used to experimentally investigate the mechanisms of rainwater deposition/intrusion in buildings. A test-based dataset of two rainwater intrusion parameters that quantify the distribution of direct impinging raindrops and surface runoff rainwater over the building surface — the rain admittance factor (RAF) and the surface runoff coefficient (SRC), respectively — was developed using common shapes of low-rise buildings. The dataset was applied to a newly formulated WDR estimation model to predict the volume of rainwater ingress through envelope openings such as wall and roof deck breaches and window sill cracks. Validation of the new model using experimental data indicated reasonable estimation of rainwater ingress through envelope defects and breaches during tropical cyclones. The WDR estimation model and experimental dataset of WDR parameters developed in this dissertation can be used to enhance the prediction capabilities of existing interior damage models such as the Florida Public Hurricane Loss Model (FPHLM).
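As a purely illustrative aid, the sketch below shows one way the two measured parameters could enter a per-opening rainwater-ingress estimate. The combination rule, parameter values, and names (rain_ingress_volume, wdr_rate_mm_hr, and so on) are assumptions for illustration and are not the dissertation's formulation.

    # Illustrative only: assumed combination of RAF and SRC for a single envelope opening.
    def rain_ingress_volume(opening_area_m2, wdr_rate_mm_hr, runoff_rate_mm_hr,
                            raf, src, duration_hr):
        """Rough rainwater ingress (litres) through one opening over a storm segment."""
        impinging = raf * wdr_rate_mm_hr * opening_area_m2   # direct wind-driven rain share
        runoff = src * runoff_rate_mm_hr * opening_area_m2   # surface-runoff share
        return (impinging + runoff) * duration_hr            # 1 mm of water over 1 m^2 = 1 litre

    # Hypothetical example: a 0.05 m^2 window-sill crack exposed for 2 hours.
    print(f"{rain_ingress_volume(0.05, 40.0, 25.0, raf=0.6, src=0.3, duration_hr=2.0):.1f} L")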
Abstract:
Accounting students become practitioners facing ethical decision-making challenges that can be subject to various interpretations; hence, the profession is concerned with the appropriateness of their decisions. The moral development of these students has implications for a profession under legal challenges, negative publicity, and government scrutiny. Accounting students' moral development has been studied by examining their responses to moral questions in Rest's Defining Issues Test (DIT), their professional attitudes on Hall's Professionalism Scale Dimensions, and their ethical orientation-based professional commitment and ethical sensitivity. This study extended research in accounting ethics and moral development by examining students in a college where an ethics course is a requirement for graduation. Knowledge of differences in the moral development of accounting students may alert practitioners and educators to potential problems resulting from a lack of ethical understanding as measured by moral development levels. If student moral development levels differ by major, and accounting majors have lower levels than other students, the conclusion may be that this difference is a causative factor for the alleged acts of malfeasance in the profession that may result in malpractice suits. The current study compared 205 accounting, business, and nonbusiness students from a private university. In addition to academic major and completion of an ethics course, the other independent variable was academic level. Gender and age were tested as control variables, and Rest's DIT score was the dependent variable. The primary analysis was a 2x3x3 ANOVA with post hoc tests for results with a significant p-value of less than 0.05. The results of this study reveal that students who take an ethics course appear to have a higher level of moral development (p=0.013), as measured by the DIT, than students at the same academic level who have not taken an ethics course. In addition, a statistically significant difference (p=0.034) exists between freshmen who took an ethics class and juniors who did not take an ethics class. For every analysis except one, the lower class year with an ethics class had a higher level of moral development than the higher class year without an ethics class. These results appear to show that ethics education in particular has a greater effect on the level of moral development than education in general. Findings based on the gender-specific analyses appear to show that males and females respond differently to the effects of taking an ethics class. The male students do not appear to increase their moral development level after taking an ethics course (p=0.693), but male levels of moral development differ significantly (p=0.003) by major. Female levels of moral development appear to increase after taking an ethics course (p=0.002); however, they do not differ according to major (p=0.097). These findings indicate that accounting students should be required to have a class in ethics as part of their college curriculum. Students with an ethics class have a significantly higher level of moral development. The challenges facing the profession at the current time indicate that public confidence in the reports of client corporations has eroded, and one way to restore this confidence could be to require ethics training of future accountants.
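For readers unfamiliar with the design, the short Python sketch below illustrates a 2x3x3 factorial ANOVA of the kind described above, using synthetic data and assumed factor labels (ethics, major, level) rather than the study's data.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    # Synthetic stand-in for the 205-student sample; DIT score is the dependent variable.
    rng = np.random.default_rng(0)
    n = 205
    df = pd.DataFrame({
        "ethics": rng.choice(["yes", "no"], n),                       # ethics course taken?
        "major": rng.choice(["accounting", "business", "nonbusiness"], n),
        "level": rng.choice(["lower", "middle", "upper"], n),         # assumed level labels
    })
    df["dit"] = 30 + 5 * (df["ethics"] == "yes") + rng.normal(0, 10, n)

    # 2x3x3 factorial ANOVA with all interactions; post hoc tests would follow.
    model = smf.ols("dit ~ C(ethics) * C(major) * C(level)", data=df).fit()
    print(anova_lm(model, typ=2))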
Abstract:
The goal of this thesis was to develop, construct, and validate the Perceived Economic Burden scale to quantitatively measure the burden associated with a subtype of Arrhythmogenic Right Ventricular Cardiomyopathy (ARVC) in families from the island of Newfoundland. An original 76-item self-administered survey was designed using content from the existing literature as well as themes from qualitative research conducted by our team, and was distributed to individuals from families known to be at risk for the disease. A response rate of 37.2% (n = 64) was achieved between December 2013 and May 2014. Tests of data quality, Likert scale assumptions, and scale reliability were conducted and provided preliminary evidence for the psychometric properties of the final perceived economic burden of ARVC scale, comprising 62 items in five sections. Findings indicated that being an affected male was a significant predictor of increased perceived economic burden on the majority of economic burden measures. Affected males also reported an increased likelihood of going on disability and difficulty obtaining insurance. Affected females likewise reported an increased perceived financial burden. Preliminary results suggest that a perceived economic burden exists within the ARVC population in Newfoundland.
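As a minimal sketch of the kind of scale reliability check mentioned above (not the thesis's actual analysis), the Python snippet below computes Cronbach's alpha for a set of Likert items using synthetic responses.

    import numpy as np

    def cronbach_alpha(items):
        """items: 2-D array with rows = respondents and columns = scale items."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Synthetic 5-point Likert responses from 64 respondents to 10 items.
    rng = np.random.default_rng(0)
    latent = rng.normal(size=(64, 1))
    responses = np.clip(np.round(3 + latent + rng.normal(0, 0.8, (64, 10))), 1, 5)
    print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")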
Abstract:
Social media classification problems have drawn increasing attention in the past few years. With the rapid development of the Internet and the popularity of computers, there is an astronomical amount of information in social networks (social media platforms). The datasets are generally large scale and are often corrupted by noise. The presence of noise in the training set has a strong impact on the performance of supervised learning (classification) techniques. A budget-driven one-class SVM approach is presented in this thesis that is suitable for large-scale social media data classification. Our approach is based on an existing online one-class SVM learning algorithm, referred to as the STOCS (Self-Tuning One-Class SVM) algorithm. To justify our choice, we first analyze the noise-resilient ability of STOCS using synthetic data. The experiments suggest that STOCS is more robust against label noise than several other existing approaches. Next, to handle the big data classification problem for social media data, we introduce several budget-driven features, which allow the algorithm to be trained within limited time and under limited memory requirements. In addition, the resulting algorithm can be easily adapted to changes in dynamic data with minimal computational cost. Compared with two state-of-the-art approaches, Lib-Linear and kNN, our approach is shown to be competitive, with lower memory and time requirements.
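The sketch below illustrates the general idea of training an online one-class SVM on a noisy stream of mini-batches under a fixed memory budget. It uses scikit-learn's generic SGDOneClassSVM rather than the STOCS algorithm, and the data and noise model are purely illustrative.

    import numpy as np
    from sklearn.linear_model import SGDOneClassSVM

    rng = np.random.default_rng(0)
    clf = SGDOneClassSVM(nu=0.1, random_state=0)

    for _ in range(50):                                       # stream of mini-batches
        batch = rng.normal(size=(200, 20))                    # mostly "normal" samples
        noisy = rng.choice(200, size=10, replace=False)
        batch[noisy] += rng.normal(0, 8, size=(10, 20))       # corrupted points in the stream
        clf.partial_fit(batch)                                # only the current batch is held in memory

    test = np.vstack([rng.normal(size=(5, 20)), rng.normal(6, 1, size=(5, 20))])
    print(clf.predict(test))                                  # +1 for inliers, -1 for outliers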
Abstract:
The authors wish to acknowledge the valuable comments and suggestions made by members of the Committee on Fisheries of the European Parliament. The authors also acknowledge the financial support of the European Parliament (IP/B/PECH/IC/2014–084) and the assistance of Ojama Priit and Marcus Brewer. SV acknowledges financial support from the Spanish Agency for International Development Cooperation (AECID) (Grant no. 11-CAP2–1406) and the Galician Government (Consellería de Cultura, Educación e Ordenación Universitaria, Xunta de Galicia) (Grant no. R2014/023). MC acknowledges financial support from the European Commission through a Marie Curie Career Integration Grant Fellowship (PCIG10-GA-2011–303534) to the BIOWEB project. CP and GP acknowledge the financial support of Caixa Geral de Depósitos (Portugal) and the University of Aveiro. CP would also like to acknowledge FCT/MEC national funds and FEDER co-funding, within the PT2020 Partnership Agreement and Compete 2020, for the financial support to CESAM (Grant no. UID/AMB/50017/2013). JMDR and JGC acknowledge financial support from the European Commission (MINOW, H2020-SFS-2014–2, No 634495) and the Xunta de Galicia (GRC 2015/014 and ECOBAS). MA acknowledges financial aid from the Xunta de Galicia through Project GPC 2013–045. URS and CP acknowledge the Too Big to Ignore Partnership, supported by the Social Sciences and Humanities Research Council of Canada (SSHRC).
Abstract:
The Bakken region of North Dakota and Montana has experienced perhaps the greatest effects of increased oil and gas development in the United States, with major implications for local governments. Though development of the Bakken began in the early 2000s, large-scale drilling and population growth dramatically affected the region from roughly 2008 through today. This case study examines the local government fiscal benefits and challenges experienced by Dunn County and Watford City, which lie near the heart of the producing region. For both local governments, the initial growth phase presented major fiscal challenges due to rapidly expanding service demands and insufficient revenue. In the following years, these challenges eased as demand for services slowed due to declining industry activity and state tax policies redirected more funds to localities. Looking forward, both local governments describe their fiscal health as stronger because of the Bakken boom, though higher debt loads and an economy heavily dependent on the volatile oil and gas industry each pose challenges for future fiscal stability.
Abstract:
Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, despite the continued relevance of uncertainty quantification in the sciences, where the number of parameters to estimate often exceeds the sample size even with the huge increases in n typically seen in many fields. Thus, the tendency in some areas of industry to dispense with traditional statistical analysis on the basis that "n=all" is of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and is the primary motivation for the work presented here.
Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is the design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second area, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms and characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.
One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
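A small numerical sketch (not from the dissertation) of the statement above that latent structure models induce a reduced-rank, nonnegative factorization of the joint probability mass function: the joint pmf of three categorical variables is assembled from latent class weights and per-class conditional distributions.

    import numpy as np

    rng = np.random.default_rng(0)
    n_classes, levels = 3, (4, 3, 5)                        # 3 latent classes; variables with 4, 3, 5 levels

    nu = rng.dirichlet(np.ones(n_classes))                  # latent class probabilities
    lambdas = [rng.dirichlet(np.ones(d), size=n_classes)    # per-class conditional pmfs
               for d in levels]

    # Joint pmf: p(x1,x2,x3) = sum_h nu[h] * lambdas[0][h,x1] * lambdas[1][h,x2] * lambdas[2][h,x3]
    pmf = np.einsum("h,ha,hb,hc->abc", nu, lambdas[0], lambdas[1], lambdas[2])
    print(pmf.shape, float(pmf.sum()))                      # (4, 3, 5) and 1.0: a rank-3 nonnegative factorization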
Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations, and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and other common population structure inference problems is assessed in simulations and a real data application.
In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis-Ylvisaker priors for the parameters of log-linear models do not give rise to closed form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis-Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
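As a rough illustration of a Gaussian posterior approximation for a log-linear model, the sketch below uses a generic Laplace approximation with an assumed Gaussian prior (not the optimal KL-minimizing approximation derived in Chapter 4): the posterior for a small Poisson log-linear model is approximated by a Gaussian centered at the posterior mode.

    import numpy as np
    from scipy.optimize import minimize

    counts = np.array([18.0, 7.0, 11.0, 4.0])             # flattened 2x2 contingency table
    X = np.array([[1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1]], dtype=float)  # intercept + main effects
    prior_var = 10.0                                      # assumed Gaussian prior variance

    def neg_log_post(beta):
        eta = X @ beta
        return np.sum(np.exp(eta) - counts * eta) + 0.5 * beta @ beta / prior_var

    def grad(beta):
        return X.T @ (np.exp(X @ beta) - counts) + beta / prior_var

    mode = minimize(neg_log_post, np.zeros(3), jac=grad, method="BFGS").x
    hessian = X.T @ np.diag(np.exp(X @ mode)) @ X + np.eye(3) / prior_var   # curvature at the mode
    cov = np.linalg.inv(hessian)                          # covariance of the Gaussian approximation
    print("approximate posterior mean:", np.round(mode, 3))
    print("approximate posterior sd:  ", np.round(np.sqrt(np.diag(cov)), 3))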
Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.
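A minimal sketch (illustrative only) of the basic quantity the paradigm above is built on: waiting times between exceedances of a high threshold in a time series.

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.standard_t(df=3, size=5000)           # heavy-tailed synthetic series
    u = np.quantile(x, 0.99)                      # high threshold (99th percentile)
    exceed_times = np.flatnonzero(x > u)          # time indices of threshold exceedances
    waiting_times = np.diff(exceed_times)         # waiting times between successive exceedances
    print(f"{exceed_times.size} exceedances, mean waiting time {waiting_times.mean():.1f} steps")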
The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo (MCMC). MCMC is the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel. Comparatively little attention has been paid to convergence and estimation error in these approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
Data augmentation Gibbs samplers are arguably the most popular class of algorithm for approximately sampling from the posterior distribution for the parameters of generalized linear models. The truncated Normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
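The slow mixing described above is typically diagnosed through autocorrelations and the effective sample size. The sketch below (not the dissertation's code) estimates the effective sample size of a chain from its initial positive autocorrelations, using a strongly autocorrelated AR(1) chain as a stand-in for a slowly mixing sampler.

    import numpy as np

    def effective_sample_size(chain):
        """Estimate ESS from the initial positive sequence of autocorrelations."""
        chain = np.asarray(chain, dtype=float)
        n = chain.size
        centered = chain - chain.mean()
        acov = np.correlate(centered, centered, mode="full")[n - 1:] / n
        acf = acov / acov[0]
        rho_sum = 0.0
        for lag in range(1, n):
            if acf[lag] <= 0:                 # truncate at the first non-positive autocorrelation
                break
            rho_sum += acf[lag]
        return n / (1.0 + 2.0 * rho_sum)

    # A highly autocorrelated AR(1) chain mimicking a poorly mixing data augmentation sampler.
    rng = np.random.default_rng(1)
    draws = np.zeros(5000)
    for t in range(1, draws.size):
        draws[t] = 0.99 * draws[t - 1] + rng.standard_normal()
    print(f"nominal draws: {draws.size}, effective sample size: {effective_sample_size(draws):.0f}")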
Abstract:
The evolution of reproductive strategies involves a complex calculus of costs and benefits to both parents and offspring. Many marine animals produce embryos packaged in tough egg capsules or gelatinous egg masses attached to benthic surfaces. While these egg structures can protect against environmental stresses, the packaging is energetically costly for parents to produce. In this series of studies, I examined a variety of ecological factors affecting the evolution of benthic development as a life history strategy. I used marine gastropods as my model system because they are incredibly diverse and abundant worldwide, and they exhibit a variety of reproductive and developmental strategies.
The first study examines predation on benthic egg masses. I investigated: 1) behavioral mechanisms of predation when embryos are targeted (rather than the whole egg mass); 2) the specific role of gelatinous matrix in predation. I hypothesized that gelatinous matrix does not facilitate predation. One study system was the sea slug Olea hansineensis, an obligate egg mass predator, feeding on the sea slug Haminoea vesicula. Olea fed intensely and efficiently on individual Haminoea embryos inside egg masses but showed no response to live embryos removed from gel, suggesting that gelatinous matrix enables predation. This may be due to mechanical support of the feeding predator by the matrix. However, Haminoea egg masses outnumber Olea by two orders of magnitude in the field, and each egg mass can contain many tens of thousands of embryos, so predation pressure on individuals is likely not strong. The second system involved the snail Nassarius vibex, a non-obligate egg mass predator, feeding on the polychaete worm Clymenella mucosa. Gel neither inhibits nor promotes embryo predation for Nassarius, but because it cannot target individual embryos inside an egg mass, its feeding is slow and inefficient, and feeding rates in the field are quite low. However, snails that compete with Nassarius for scavenged food have not been seen to eat egg masses in the field, leaving Nassarius free to exploit the resource. Overall, egg mass predation in these two systems likely benefits the predators much more than it negatively affects the prey. Thus, selection for environmentally protective aspects of egg mass production may be much stronger than selection for defense against predation.
In the second study, I examined desiccation resistance in intertidal egg masses made by Haminoea vesicula, which preferentially attaches its flat, ribbon-shaped egg masses to submerged substrata. Egg masses occasionally detach and become stranded on exposed sand at low tide. Unlike adults, the encased embryos cannot avoid desiccation by selectively moving about the habitat, and the egg mass shape has a high surface-area-to-volume ratio that should make it prone to drying out. Thus, I hypothesized that the embryos would not survive stranding. I tested this by deploying individual egg masses of two age classes on exposed sand bars for the duration of low tide. After rehydration, embryos midway through development showed higher rates of survival than newly laid embryos, though for both stages survival rates over 25% were frequently observed. Laboratory desiccation trials showed that >75% survival is possible in an egg mass that has lost 65% of its water weight, and some survival (<25%) was observed even after 83% of water weight was lost. Although many surviving embryos in both experiments showed damage, these data demonstrate that egg mass stranding is not necessarily fatal to embryos. They may be able to survive a far greater range of conditions than they normally encounter, compensating for their lack of ability to move. Also, desiccation tolerance of embryos may reduce pressure on parents to find optimal laying substrata.
The third study takes a big-picture approach to investigating the evolution of different developmental strategies in cone snails, the largest genus of marine invertebrates. Cone snail species hatch out of their capsules as either swimming larvae or non-dispersing forms, and their developmental mode has direct consequences for biogeographic patterns. Variability in life history strategies among taxa may be influenced by biological, environmental, or phylogenetic factors, or a combination of these. While most prior research has examined these factors singularly, my aim was to investigate the effects of a host of intrinsic, extrinsic, and historical factors on two fundamental aspects of life history: egg size and egg number. I used phylogenetic generalized least-squares regression models to examine relationships between these two egg traits and a variety of hypothesized intrinsic and extrinsic variables. Adult shell morphology and spatial variability in productivity and salinity across a species' geographic range had the strongest effects on egg diameter and number of eggs per capsule. Phylogeny had no significant influence. Developmental mode in Conus appears to be influenced mostly by species-level adaptations and niche specificity rather than phylogenetic conservatism. Patterns of egg size and egg number appear to reflect energetic tradeoffs with body size and specific morphologies as well as adaptations to variable environments. Overall, this series of studies highlights the importance of organism-scale biotic and abiotic interactions in evolutionary patterns.
Abstract:
Optical coherence tomography (OCT) is a noninvasive three-dimensional interferometric imaging technique capable of achieving micrometer scale resolution. It is now a standard of care in ophthalmology, where it is used to improve the accuracy of early diagnosis, to better understand the source of pathophysiology, and to monitor disease progression and response to therapy. In particular, retinal imaging has been the most prevalent clinical application of OCT, but researchers and companies alike are developing OCT systems for cardiology, dermatology, dentistry, and many other medical and industrial applications.
Adaptive optics (AO) is a technique used to reduce monochromatic aberrations in optical instruments. It is used in astronomical telescopes, laser communications, high-power lasers, retinal imaging, optical fabrication and microscopy to improve system performance. Scanning laser ophthalmoscopy (SLO) is a noninvasive confocal imaging technique that produces high contrast two-dimensional retinal images. AO is combined with SLO (AOSLO) to compensate for the wavefront distortions caused by the optics of the eye, providing the ability to visualize the living retina with cellular resolution. AOSLO has shown great promise to advance the understanding of the etiology of retinal diseases on a cellular level.
Broadly, we endeavor to enhance the vision outcome of ophthalmic patients through improved diagnostics and personalized therapy. Toward this end, the objective of the work presented herein was the development of advanced techniques for increasing the imaging speed, reducing the form factor, and broadening the versatility of OCT and AOSLO. Despite our focus on applications in ophthalmology, the techniques developed could be applied to other medical and industrial applications. In this dissertation, a technique to quadruple the imaging speed of OCT was developed. This technique was demonstrated by imaging the retinas of healthy human subjects. A handheld, dual depth OCT system was developed. This system enabled sequential imaging of the anterior segment and retina of human eyes. Finally, handheld SLO/OCT systems were developed, culminating in the design of a handheld AOSLO system. This system has the potential to provide cellular level imaging of the human retina, resolving even the most densely packed foveal cones.
Abstract:
The full-scale base-isolated structure studied in this dissertation is the only base-isolated building in the South Island of New Zealand. It sustained hundreds of earthquake ground motions from September 2010 well into 2012. Several large earthquake responses were recorded in December 2011 by NEES@UCLA and by the GeoNet recording station near Christchurch Women's Hospital. The primary focus of this dissertation is to advance the state of the art of methods to evaluate the performance of seismic-isolated structures and the effects of soil-structure interaction, by developing new data processing methodologies to overcome current limitations and by implementing advanced numerical modeling in OpenSees for direct analysis of soil-structure interaction.
This dissertation presents a novel method for recovering force-displacement relations within the isolators of building structures with unknown nonlinearities from sparse seismic-response measurements of floor accelerations. The method requires only direct matrix calculations (factorizations and multiplications); no iterative trial-and-error methods are required. The method requires a mass matrix, or at least an estimate of the floor masses. A stiffness matrix may be used, but is not necessary. Essentially, the method operates on a matrix of incomplete measurements of floor accelerations. In the special case of complete floor measurements of systems with linear dynamics, real modes, and equal floor masses, the principal components of this matrix are the modal responses. In the more general case of partial measurements and nonlinear dynamics, the method extracts a number of linearly-dependent components from Hankel matrices of measured horizontal response accelerations, assembles these components row-wise and extracts principal components from the singular value decomposition of this large matrix of linearly-dependent components. These principal components are then interpolated between floors in a way that minimizes the curvature energy of the interpolation. This interpolation step can make use of a reduced-order stiffness matrix, a backward difference matrix or a central difference matrix. The measured and interpolated floor acceleration components at all floors are then assembled and multiplied by a mass matrix. The recovered in-service force-displacement relations are then incorporated into the OpenSees soil structure interaction model.
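A simplified numerical sketch of the first steps of the recovery method described above, with synthetic two-floor acceleration records standing in for the sparse measurements: Hankel matrices are built from each measured channel, assembled row-wise, and the dominant principal components are extracted by singular value decomposition. The signal model and dimensions are assumptions for illustration.

    import numpy as np
    from scipy.linalg import hankel, svd

    rng = np.random.default_rng(2)
    t = np.linspace(0, 20, 2000)
    # Hypothetical measured accelerations at two instrumented floors (two modes plus noise).
    floor_acc = np.vstack([
        np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 3.5 * t),
        0.6 * np.sin(2 * np.pi * 1.2 * t) - 0.5 * np.sin(2 * np.pi * 3.5 * t),
    ]) + 0.05 * rng.standard_normal((2, t.size))

    window = 200  # number of rows in each Hankel matrix
    hankel_blocks = [hankel(acc[:window], acc[window - 1:]) for acc in floor_acc]
    stacked = np.vstack(hankel_blocks)            # row-wise assembly of linearly dependent components

    U, s, Vt = svd(stacked, full_matrices=False)
    print("leading singular values:", np.round(s[:4], 1))
    principal_components = Vt[:2]                 # dominant components of the measured response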
Numerical simulations of soil-structure interaction involving non-uniform soil behavior are conducted following the development of the complete soil-structure interaction model of Christchurch Women's Hospital in OpenSees. In these 2D OpenSees models, the superstructure is modeled as two-dimensional frames in the short-span and long-span directions, respectively. The lead rubber bearings are modeled as elastomeric bearing (Bouc-Wen) elements. The soil underlying the concrete raft foundation is modeled with linear elastic plane-strain quadrilateral elements. The non-uniformity of the soil profile is incorporated by extraction and interpolation of the shear wave velocity profile from the Canterbury Geotechnical Database. The validity of the complete two-dimensional soil-structure interaction OpenSees model for the hospital is checked by comparing the peak floor responses and force-displacement relations within the isolation system obtained from OpenSees simulations with the recorded measurements. General explanations and implications, supported by displacement drifts, floor acceleration and displacement responses, and force-displacement relations, are described to address the effects of soil-structure interaction.
Abstract:
Concern for the sustainability of our planet is widespread. The ever-increasing economic activity and large-scale industrialisation our consumer society requires have increased concerns among academics, politicians, and consumers alike about natural resource depletion, waste management, the dangers of toxic chemicals, and climate change. Human consumption is causing major issues for the space we inhabit. Much work has been done over the past four decades to remedy human impact on our environment at the corporate, policy, and consumer levels. But concerns about our ability to progress the sustainability agenda remain. Consumer behaviour plays a pivotal role in sustainable development. In light of this, we need to explore and understand the ways in which consumption occurs in consumers' lives, with the aim of changing behaviours that do not support the natural environment. Questions on how to change consumer behaviour dominate much of the literature on sustainable consumption, but substantial behaviour change among individuals has not occurred as predicted. Some focus has shifted to upstream interventions, such as education. The Green-Schools Programme (known internationally as Eco-Schools) is one such intervention. The aim of this thesis was to explore consumption in the context of the Green-Schools Programme. The main research question asks: in the context of the Green-Schools, how are sustainable behaviour practices developed in the home? The findings presented in this thesis show that sustainable behaviour has developed in the home from both internal and external factors, the Green-Schools effect being one such factor; the programme does influence behaviour in the home context to some degree. One of the main findings of this research indicates that schoolchildren are exerting ‘positive pester power’ over household behaviour practices and that the majority of households are passively practicing sustainable consumption. These findings contribute to knowledge on sustainable consumption in the home context.
Abstract:
A recently developed novel biomass fuel pellet, the Q’ Pellet, offers significant improvements over conventional white pellets, with characteristics comparable to those of coal. The Q’ Pellet was initially created at bench scale using a proprietary die and punch design, in which the biomass was torrefied in-situ and then compressed. To bring the benefits of the Q’ Pellet to a commercial level, it must be capable of being produced in a continuous process at a competitive cost. A prototype machine was previously constructed in a first effort to assess continuous processing of the Q’ Pellet. The prototype torrefied biomass in a separate, ex-situ reactor and transported it into a rotary compression stage. Upon evaluation, parts of the prototype were found to be unsuccessful and required a redesign of the material transport method as well as the compression mechanism. A process was developed in which material was torrefied ex-situ and extruded in a pre-compression stage. The extruded biomass overcame multiple handling issues that had been experienced with un-densified biomass, facilitating efficient material transport. Biomass was extruded directly into a novel re-designed pelletizing die, which incorporated a removable cap, an ejection pin, and a die spring to accommodate a repeatable continuous process. Although after several uses the die required manual intervention due to minor design and manufacturing quality limitations, the system clearly demonstrated the capability of producing the Q’ Pellet in a continuous process. Q’ Pellets produced by the pre-compression method and pelletized in the re-designed die had an average dry-basis gross calorific value of 22.04 MJ/kg, a pellet durability index of 99.86%, and, following 24 hours submerged in water, dried to within 6.2% of their initial mass. This compares well with literature results of 21.29 MJ/kg, 100% pellet durability index, and <5% mass increase in a water submersion test. These results indicate that the methods developed herein are capable of producing Q’ Pellets in a continuous process, with fuel properties competitive with coal.