6 results for Learning to program
at Duke University
Abstract:
OBJECTIVE: To pilot test whether Orthopaedic Surgery residents could self-assess their performance using the newly created milestones defined by the Accreditation Council for Graduate Medical Education. METHODS: In June 2012, an email was sent to Program Directors and administrative coordinators of the 154 accredited Orthopaedic Surgery Programs, asking them to send their residents a link to an online survey. The survey was adapted from the Orthopaedic Surgery Milestone Project. Completed surveys were aggregated in an anonymous, confidential database. SAS 9.3 was used to perform the analyses. RESULTS: Responses from 71 residents were analyzed. First- and second-year residents indicated through self-assessment that they had substantially achieved the Level 1 and Level 2 milestones. Third-year residents reported they had substantially achieved 30 of the 41 Level 3 milestones, and fourth-year residents all of them. Fifth-year (graduating) residents reported they had substantially achieved 17 Level 4 milestones and were extremely close on another 15. No milestone was rated at Level 5, the maximum possible. Earlier in training, Patient Care and Medical Knowledge milestones were rated lower than the milestones reflecting the other four competencies of Practice-Based Learning and Improvement, Systems-Based Practice, Professionalism, and Interpersonal Communication; the gap was closed by the fourth year. CONCLUSIONS: Residents were able to successfully self-assess using the 41 Orthopaedic Surgery milestones, and respondents rated their proficiency as improving over time. Graduating residents report that they have substantially, or close to substantially, achieved all Level 4 milestones. Milestone self-assessment may be a useful tool as one component of a program's overall performance assessment strategy.
Abstract:
Spectral CT using a photon counting x-ray detector (PCXD) shows great potential for measuring material composition based on energy-dependent x-ray attenuation. Spectral CT is especially suited for imaging with K-edge contrast agents to address the otherwise limited contrast in soft tissues. We have developed a micro-CT system based on a PCXD. This system enables full-spectrum CT, in which the energy thresholds of the PCXD are swept to sample the full energy spectrum for each detector element and projection angle. Measurements provided by the PCXD, however, are distorted due to undesirable physical effects in the detector and are very noisy due to photon starvation. In this work, we propose two methods based on machine learning to address the spectral distortion issue and to improve the material decomposition. The first approach is to model the distortions using an artificial neural network (ANN) and compensate for them in a statistical reconstruction. The second approach is to directly correct for the distortion in the projections. Both techniques can be performed as a calibration process in which the neural network is trained on data from 3D-printed phantoms to learn either the distortion model or the correction model for the spectral distortion. This replaces the synchrotron measurements required by conventional techniques to derive the distortion model parametrically, which can be costly and time consuming. The results demonstrate the experimental feasibility and potential advantages of ANN-based distortion modeling and correction for more accurate K-edge imaging with a PCXD. Given the computational efficiency with which the ANN can be applied to projection data, the proposed scheme can be readily integrated into existing CT reconstruction pipelines.
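As a rough illustration of the projection-domain correction idea, the sketch below trains a tiny neural network to map distorted threshold counts back to undistorted ones. Everything in it is assumed for the example: the number of energy bins, the fabricated "distortion", the NumPy-only MLP, and the training setup all stand in for the real PCXD calibration on 3D-printed phantoms.

```python
# Minimal sketch (assumptions: synthetic data, a tiny NumPy MLP, MSE loss).
# The real system learns from 3D-printed calibration phantoms; here we
# fabricate a toy "distortion" so the example is self-contained.
import numpy as np

rng = np.random.default_rng(0)
n_bins = 8          # energy thresholds swept by the PCXD (toy value)

def distort(ideal):
    """Toy stand-in for detector distortion: bin cross-talk plus noise."""
    crosstalk = 0.15 * np.roll(ideal, 1, axis=1)
    return ideal + crosstalk + rng.normal(0, 0.01, ideal.shape)

# Training pairs: (distorted counts, ideal counts) per detector element.
ideal = rng.uniform(0.0, 1.0, size=(5000, n_bins))
meas = distort(ideal)

# One-hidden-layer network, trained with plain full-batch gradient descent.
W1 = rng.normal(0, 0.1, (n_bins, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.1, (32, n_bins)); b2 = np.zeros(n_bins)
lr = 0.05
for epoch in range(200):
    h = np.maximum(meas @ W1 + b1, 0.0)        # ReLU hidden layer
    pred = h @ W2 + b2
    err = pred - ideal                          # gradient of the MSE loss
    gW2 = h.T @ err / len(meas); gb2 = err.mean(0)
    dh = (err @ W2.T) * (h > 0)
    gW1 = meas.T @ dh / len(meas); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

# "Correct" a new projection row before handing it to the reconstruction.
new_meas = distort(rng.uniform(0.0, 1.0, size=(1, n_bins)))
corrected = np.maximum(new_meas @ W1 + b1, 0.0) @ W2 + b2
```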
Abstract:
While molecular and cellular processes are often modeled as stochastic processes, such as Brownian motion, chemical reaction networks, and gene regulatory networks, there have been few attempts to program a molecular-scale process to physically implement stochastic processes. DNA has been used as a substrate for programming molecular interactions, but its applications have been restricted to deterministic functions, and unfavorable properties such as slow processing, thermal annealing, aqueous solvents, and difficult readout limit them to proof-of-concept purposes. To date, whether there exists a molecular process that can be programmed to implement stochastic processes for practical applications remains unknown.
In this dissertation, a fully specified Resonance Energy Transfer (RET) network between chromophores is accurately fabricated via DNA self-assembly, and the exciton dynamics in the RET network physically implement a stochastic process, specifically a continuous-time Markov chain (CTMC), which has a direct mapping to the physical geometry of the chromophore network. Excited by a light source, a RET network generates random samples in the temporal domain in the form of fluorescence photons which can be detected by a photon detector. The intrinsic sampling distribution of a RET network is derived as a phase-type distribution configured by its CTMC model. The conclusion is that the exciton dynamics in a RET network implement a general and important class of stochastic processes that can be directly and accurately programmed and used for practical applications of photonics and optoelectronics. Different approaches to using RET networks exist with vast potential applications. As an entropy source that can directly generate samples from virtually arbitrary distributions, RET networks can benefit applications that rely on generating random samples such as 1) fluorescent taggants and 2) stochastic computing.
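The mapping from a RET network to a CTMC can be illustrated with a toy simulation: treat each chromophore as a transient state, add an absorbing "photon emitted" state, and draw the exciton's emission time by simulating the chain. The rates, network size, and starting chromophore below are invented for the sketch; the analytic check uses the standard phase-type mean formula.

```python
# Minimal sketch (assumptions: a toy 3-chromophore network with made-up
# transfer and emission rates). The exciton's hop-until-emission time is a
# draw from the phase-type distribution defined by the CTMC's sub-generator.
import numpy as np

rng = np.random.default_rng(1)

# Transient states = chromophores 0..2; transfer rates between them (1/ns)
# and radiative emission rates into the absorbing "photon emitted" state.
transfer = np.array([[0.0, 2.0, 0.5],
                     [1.0, 0.0, 3.0],
                     [0.2, 0.8, 0.0]])
emit = np.array([0.3, 0.1, 1.5])

def sample_emission_time(start=0):
    """Gillespie simulation of one exciton: returns the photon emission time."""
    t, s = 0.0, start
    while True:
        rates = np.append(transfer[s], emit[s])
        total = rates.sum()
        t += rng.exponential(1.0 / total)          # exponential holding time
        nxt = rng.choice(len(rates), p=rates / total)
        if nxt == len(rates) - 1:                  # emission: absorbing state
            return t
        s = nxt

samples = np.array([sample_emission_time() for _ in range(10000)])
print("simulated mean photon arrival time (ns):", samples.mean())

# Analytic check: phase-type mean = alpha @ inv(-S) @ 1, with S the sub-generator.
S = transfer - np.diag(transfer.sum(1) + emit)
alpha = np.array([1.0, 0.0, 0.0])                  # exciton starts on chromophore 0
print("analytic mean (ns):", alpha @ np.linalg.inv(-S) @ np.ones(3))
```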
By using RET networks between chromophores to implement fluorescent taggants with temporally coded signatures, the taggant design is not constrained by resolvable dyes and has a significantly larger coding capacity than spectrally or lifetime coded fluorescent taggants. Meanwhile, the taggant detection process becomes highly efficient, and the Maximum Likelihood Estimation (MLE) based taggant identification guarantees high accuracy even with only a few hundred detected photons.
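A hedged sketch of the MLE-based identification step: each taggant in a hypothetical library is summarized by a phase-type model (initial vector and sub-generator), and a batch of detected photon arrival times is assigned to whichever model maximizes the summed log-likelihood. The taggant parameters and the stand-in photon times below are made up; `scipy.linalg.expm` supplies the matrix exponential in the phase-type density.

```python
# Minimal sketch (assumptions: two hypothetical taggants described by small
# phase-type models; photon arrival times are classified by log-likelihood).
import numpy as np
from scipy.linalg import expm

def phase_type_logpdf(t, alpha, S):
    """log f(t) for a phase-type distribution with initial vector alpha,
    sub-generator S, and exit-rate vector s0 = -S @ 1."""
    s0 = -S @ np.ones(len(alpha))
    return np.log(alpha @ expm(S * t) @ s0)

# Hypothetical taggant library: (alpha, S) per temporal signature.
taggants = {
    "taggant_A": (np.array([1.0, 0.0]),
                  np.array([[-3.0, 2.0], [0.5, -1.5]])),
    "taggant_B": (np.array([0.5, 0.5]),
                  np.array([[-6.0, 1.0], [2.0, -4.0]])),
}

def identify(photon_times):
    """Return the taggant whose model maximizes the summed log-likelihood."""
    scores = {name: sum(phase_type_logpdf(t, a, S) for t in photon_times)
              for name, (a, S) in taggants.items()}
    return max(scores, key=scores.get)

# A few hundred detected photons, as in the abstract; stand-in detector output.
rng = np.random.default_rng(2)
fake_times = rng.exponential(0.4, size=300)
print(identify(fake_times))
```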
Meanwhile, RET-based sampling units (RSUs) can be constructed to accelerate probabilistic algorithms with wide applications in machine learning and data analytics. Because probabilistic algorithms often rely on iteratively sampling from parameterized distributions, they can be inefficient in practice on the deterministic hardware used by traditional computers, especially for high-dimensional and complex problems. As an efficient universal sampling unit, the proposed RSU can be integrated into a processor or GPU as a specialized functional unit, or organized as a discrete accelerator, to bring substantial speedups and power savings.
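Since the RSU is proposed hardware, the sketch below only shows where it would sit in software: a `draw(params)`-style sampling interface (here a NumPy stand-in) that a probabilistic algorithm calls in its inner loop, which is the call a dedicated sampling unit would offload.

```python
# Minimal sketch (assumption: a software stand-in for the proposed RSU with a
# draw(rate, n) interface). In a sampling-heavy probabilistic algorithm the
# inner draw() call dominates the run time; that is what hardware would replace.
import numpy as np

class SoftwareSampler:
    """Stand-in for an RET-based sampling unit: draws from a parameterized
    distribution (here, exponential with a given rate)."""
    def __init__(self, seed=0):
        self.rng = np.random.default_rng(seed)

    def draw(self, rate, n):
        return self.rng.exponential(1.0 / rate, size=n)

def monte_carlo_mean(sampler, rate, n_samples=100_000):
    # Every iteration funnels through draw(); offloading it to a dedicated
    # unit is where the claimed speedups and power savings would come from.
    return sampler.draw(rate, n_samples).mean()

print(monte_carlo_mean(SoftwareSampler(), rate=2.0))   # ~0.5
```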
Abstract:
Cumulon is a system aimed at simplifying the development and deployment of statistical analysis of big data in public clouds. Cumulon allows users to program in their familiar language of matrices and linear algebra, without worrying about how to map data and computation to specific hardware and cloud software platforms. Given user-specified requirements in terms of time, monetary cost, and risk tolerance, Cumulon automatically makes intelligent decisions on implementation alternatives, execution parameters, and hardware provisioning and configuration settings -- such as what type of machines to acquire and how many of them. Cumulon also supports clouds with auction-based markets: it effectively utilizes computing resources whose availability varies according to market conditions, and suggests the best bidding strategies for them. Cumulon explores two alternative approaches toward supporting such markets, with different trade-offs between system and optimization complexity. An experimental study is conducted to show the efficiency of Cumulon's execution engine, as well as the optimizer's effectiveness in finding the optimal plan in the vast plan space.
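A minimal sketch of the provisioning decision Cumulon automates, under invented machine types, prices, and a toy linear-speedup model: enumerate (machine type, cluster size) pairs and pick the cheapest one that meets the user's deadline. Cumulon's actual optimizer covers a much larger plan space (implementation alternatives, execution parameters, bidding strategies); this only conveys the flavor of the search.

```python
# Minimal sketch (assumptions: made-up machine types, hourly prices, relative
# speeds, and a toy parallel-efficiency factor).
MACHINES = {
    "small":  {"price": 0.10, "speed": 1.0},
    "large":  {"price": 0.40, "speed": 4.2},
    "xlarge": {"price": 0.80, "speed": 8.0},
}
BASE_HOURS = 120.0               # assumed job time on one "small" machine

def plan(deadline_hours, max_nodes=64, efficiency=0.9):
    """Return the cheapest (cost, machine, count, hours) finishing in time."""
    best = None
    for name, m in MACHINES.items():
        for n in range(1, max_nodes + 1):
            hours = BASE_HOURS / (m["speed"] * n * efficiency)
            cost = hours * n * m["price"]
            if hours <= deadline_hours and (best is None or cost < best[0]):
                best = (cost, name, n, hours)
    return best

print(plan(deadline_hours=6.0))
```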
Abstract:
At least since the seminal works of Jacob Mincer, labor economists have sought to understand how students make higher education investment decisions. Mincer’s original work seeks to understand how students decide how much education to accrue; subsequent work by various authors seeks to understand how students choose where to attend college, what field to major in, and whether to drop out of college.
Broadly speaking, this rich sub-field of literature contributes to society in two ways: First, it provides a better understanding of important social behaviors. Second, it helps policymakers anticipate the responses of students when evaluating various policy reforms.
While research on the higher education investment decisions of students has had an enormous impact on our understanding of society and has shaped countless education policies, students are only one interested party in the higher education landscape. In the jargon of economists, students represent only the 'demand side' of higher education---customers who are choosing options from a set of available alternatives. Opposite students are instructors and administrators, who represent the 'supply side' of higher education---those who decide which options are available to students.
For similar reasons, it is also important to understand how individuals on the supply side of education make decisions: First, this provides a deeper understanding of the behaviors of important social institutions. Second, it helps policymakers anticipate the responses of instructors and administrators when evaluating various reforms. However, while there is a substantial literature on decisions made on the demand side of education, far less attention has been paid to decisions on the supply side.
This dissertation uses empirical evidence to better understand how instructors and administrators make decisions and the implications of these decisions for students.
In the first chapter, I use data from Duke University and a Bayesian model of correlated learning to measure the signal quality of grades across academic fields. The correlated structure of the model allows grades in one academic field to signal ability in all other fields, which lets me measure both 'own-category' signal quality and 'spillover' signal quality. Estimates reveal a clear division between information-rich Science, Engineering, and Economics grades and less informative Humanities and Social Science grades. In many specifications, information spillovers are so powerful that precise Science, Engineering, and Economics grades are more informative about Humanities and Social Science abilities than Humanities and Social Science grades are. This suggests that students who take engineering courses during their freshman year make more informed specialization decisions later in college.
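A small sketch of the correlated-learning idea, with an invented prior covariance and noise variances: abilities across fields share a joint Gaussian prior, a grade is a noisy signal of ability in its own field, and the posterior update moves beliefs about every field, which is the 'spillover' the chapter measures.

```python
# Minimal sketch (assumptions: made-up prior covariance and noise variances).
# One observed grade updates beliefs about ability in all correlated fields.
import numpy as np

fields = ["Engineering", "Economics", "Humanities"]
mu = np.zeros(3)                                   # prior mean ability
Sigma = np.array([[1.0, 0.6, 0.3],                 # prior ability covariance
                  [0.6, 1.0, 0.4],
                  [0.3, 0.4, 1.0]])
noise_var = {"Engineering": 0.2, "Economics": 0.3, "Humanities": 0.9}

def update(mu, Sigma, field, grade_signal):
    """Gaussian posterior after observing one grade in `field`."""
    j = fields.index(field)
    s = Sigma[:, j]                                # covariance with the signal
    k = s / (Sigma[j, j] + noise_var[field])       # Kalman-style gain
    mu_post = mu + k * (grade_signal - mu[j])
    Sigma_post = Sigma - np.outer(k, s)
    return mu_post, Sigma_post

# A precise Engineering grade moves beliefs about Humanities ability too.
mu1, Sigma1 = update(mu, Sigma, "Engineering", grade_signal=1.2)
print(dict(zip(fields, np.round(mu1, 3))))
```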
In the second chapter, I use data from the University of Central Arkansas to understand how universities decide which courses to offer and how much to spend on instructors for these courses. Course offerings and instructor characteristics directly affect the courses students choose and the value they receive from these choices. This chapter recovers the university preferences over these student outcomes that best explain the observed course offerings and instructors. This allows me to assess whether the university's incentives are aligned with those of students, to determine which alternative university choices students would prefer, and to illustrate how a revenue-neutral tax/subsidy policy can induce a university to make these student-best decisions.
In the third chapter, co-authored with Thomas Ahn, Peter Arcidiacono, and Amy Hopson, we use data from the University of Kentucky to understand how instructors choose grading policies. In this chapter, we estimate an equilibrium model in which instructors choose grading policies and students choose courses and study effort given those grading policies. In this model, instructors set both a grading intercept and a return on ability and effort, which builds a rich link between the grading policy decisions of instructors and the course choices of students. We use estimates of this model to infer which preference parameters best explain why instructors chose the estimated grading policies. To illustrate the importance of these supply-side decisions, we show that changing grading policies can substantially reduce the gender gap in STEM enrollment.
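A toy sketch, not the estimated model: assume a grade equals an intercept plus a slope times (ability + effort) and that students pay a quadratic effort cost. It only illustrates how the slope of a grading policy feeds into chosen effort and realized grades.

```python
# Minimal sketch (assumptions: toy functional forms and parameter values).
import numpy as np

def student_effort(ability, intercept, slope, grade_weight=1.0, cost=0.5):
    """Effort maximizing grade_weight*grade - cost*effort^2 for one course."""
    # d/de [ grade_weight*(intercept + slope*(ability + e)) - cost*e^2 ] = 0
    # With linear returns, optimal effort depends only on the slope, not on
    # ability or the intercept.
    return max(grade_weight * slope / (2 * cost), 0.0)

abilities = np.linspace(-1, 1, 5)
for intercept, slope in [(3.0, 0.2), (2.0, 0.8)]:   # lenient vs. steep policy
    efforts = [student_effort(a, intercept, slope) for a in abilities]
    grades = [intercept + slope * (a + e) for a, e in zip(abilities, efforts)]
    print(f"slope={slope}: mean effort={np.mean(efforts):.2f}, "
          f"mean grade={np.mean(grades):.2f}")
```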
Abstract:
It is widely accepted that infants begin learning their native language not by learning words, but by discovering features of the speech signal: consonants, vowels, and combinations of these sounds. Learning to understand words, as opposed to just perceiving their sounds, is said to come later, between 9 and 15 mo of age, when infants develop a capacity for interpreting others' goals and intentions. Here, we demonstrate that this consensus about the developmental sequence of human language learning is flawed: in fact, infants already know the meanings of several common words from the age of 6 mo onward. We presented 6- to 9-mo-old infants with sets of pictures to view while their parent named a picture in each set. Over this entire age range, infants directed their gaze to the named pictures, indicating their understanding of spoken words. Because the words were not trained in the laboratory, the results show that even young infants learn ordinary words through daily experience with language. This surprising accomplishment indicates that, contrary to prevailing beliefs, either infants can already grasp the referential intentions of adults at 6 mo or infants can learn words before this ability emerges. The precocious discovery of word meanings suggests a perspective in which learning vocabulary and learning the sound structure of spoken language go hand in hand as language acquisition begins.