862 results for Algorithms, Properties, the KCube Graphs
Abstract:
Image super-resolution is defined as a class of techniques that enhance the spatial resolution of images. Super-resolution methods can be subdivided into single- and multi-image methods. This thesis focuses on developing algorithms based on mathematical theories for single-image super-resolution problems. Indeed, in order to estimate an output image, we adopt a mixed approach: i.e., we use both a dictionary of patches with sparsity constraints (typical of learning-based methods) and regularization terms (typical of reconstruction-based methods). Although the existing methods already perform well, they do not take into account the geometry of the data to: regularize the solution, cluster data samples (samples are often clustered using algorithms with the Euclidean distance as a dissimilarity metric), or learn dictionaries (they are often learned using PCA or K-SVD). Thus, state-of-the-art methods still suffer from shortcomings. In this work, we propose three new methods to overcome these deficiencies. First, we developed SE-ASDS (a structure-tensor-based regularization term) in order to improve the sharpness of edges. SE-ASDS achieves much better results than many state-of-the-art algorithms. Then, we proposed the AGNN and GOC algorithms for determining a local subset of training samples from which a good local model can be computed for reconstructing a given input test sample, taking into account the underlying geometry of the data. The AGNN and GOC methods outperform spectral clustering, soft clustering, and geodesic-distance-based subset selection in most settings. Next, we proposed the aSOB strategy, which takes into account the geometry of the data and the dictionary size. The aSOB strategy outperforms both PCA and PGA methods. Finally, we combine all our methods in a single algorithm, named G2SR. Our proposed G2SR algorithm shows better visual and quantitative results when compared to state-of-the-art methods.
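The building block shared by the dictionary-with-sparsity methods described above can be illustrated with a minimal sketch. This is not the thesis' SE-ASDS/G2SR code: the dictionary D, the patch y, the penalty weight `lam` and the step size are all illustrative assumptions, and the solver is plain ISTA for an l1-regularized least-squares patch reconstruction.

```python
# Minimal sketch: reconstruct a patch as a sparse combination of dictionary atoms.
import numpy as np

def sparse_code_patch(D, y, lam=0.1, n_iter=200):
    """Solve min_a 0.5*||y - D a||^2 + lam*||a||_1 with ISTA (proximal gradient)."""
    a = np.zeros(D.shape[1])
    step = 1.0 / np.linalg.norm(D, 2) ** 2              # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = D.T @ (D @ a - y)                         # gradient of the data-fidelity term
        a = a - step * grad
        a = np.sign(a) * np.maximum(np.abs(a) - step * lam, 0.0)  # soft-threshold (prox of l1)
    return a

# toy usage: 64-dimensional patches, 128 dictionary atoms (random stand-in dictionary)
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)                           # unit-norm atoms
y = rng.standard_normal(64)
a = sparse_code_patch(D, y)
x_hat = D @ a                                            # reconstructed patch estimate
```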
Abstract:
Atomisation of an aqueous solution for tablet film coating is a complex process with multiple factors determining droplet formation and properties. The importance of droplet size for an efficient process and a high-quality final product has been noted in the literature, with smaller droplets reported to produce smoother, more homogeneous coatings whilst simultaneously avoiding the risk of damage through over-wetting of the tablet core. In this work, the effect of droplet size on tablet film coat characteristics was investigated using X-ray microcomputed tomography (XμCT) and confocal laser scanning microscopy (CLSM). A quality-by-design approach utilising design of experiments (DOE) was used to optimise the conditions necessary for production of droplets at a small (20 μm) and large (70 μm) droplet size. Droplet size distribution was measured using real-time laser diffraction, with the volume median diameter taken as the response. The DOE yielded information on the relationship between three critical process parameters (pump rate, atomisation pressure and coating-polymer concentration) and droplet size. The model generated was robust, scoring highly for model fit (R2 = 0.977), predictability (Q2 = 0.837), validity and reproducibility. Modelling confirmed that all parameters had either a linear or quadratic effect on droplet size and revealed an interaction between pump rate and atomisation pressure. Fluidised bed coating of tablet cores was performed with either small or large droplets, followed by CLSM and XμCT imaging. Addition of commonly used contrast materials to the coating solution improved visualisation of the coating by XμCT, showing the coat as a discrete section of the overall tablet. Imaging provided qualitative and quantitative evidence revealing that smaller droplets formed thinner, more uniform and less porous film coats.
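For readers unfamiliar with this kind of DOE model, the sketch below shows how a quadratic response surface with an interaction term can be fitted to relate pump rate, atomisation pressure and polymer concentration to droplet size. The design points and responses are invented placeholders, not the study's data, and the study's own DOE software is not specified.

```python
# Illustrative quadratic response-surface fit (hypothetical data).
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

# columns: pump rate, atomisation pressure, polymer concentration (coded -1/0/+1 units)
X = np.array([[-1, -1, -1], [1, -1, -1], [-1, 1, -1], [1, 1, -1],
              [-1, -1, 1], [1, -1, 1], [-1, 1, 1], [1, 1, 1],
              [0, 0, 0], [0, 0, 0]], dtype=float)
y = np.array([68.0, 52.0, 41.0, 24.0, 61.0, 47.0, 35.0, 21.0, 44.0, 45.0])  # droplet size, µm (made up)

quad = PolynomialFeatures(degree=2, include_bias=False)   # linear, quadratic and interaction terms
model = LinearRegression().fit(quad.fit_transform(X), y)
print("R2 =", round(model.score(quad.fit_transform(X), y), 3))
print(dict(zip(quad.get_feature_names_out(["pump", "pressure", "conc"]), model.coef_.round(2))))
```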
Abstract:
Integrating information from multiple sources is a crucial function of the brain. Examples of such integration include multiple stimuli of different modalities, such as visual and auditory; multiple stimuli of the same modality, such as two concurrent auditory stimuli; and integrating stimuli from the sensory organs (i.e., the ears) with stimuli delivered via brain-machine interfaces.
The overall aim of this body of work is to empirically examine stimulus integration in these three domains to inform our broader understanding of how and when the brain combines information from multiple sources.
First, I examine visually guided auditory learning, a problem with implications for the general question of how the brain determines which lessons to learn (and which not to learn). For example, sound localization is a behavior that is partially learned with the aid of vision. This process requires correctly matching a visual location to that of a sound. This is an intrinsically circular problem when sound location is itself uncertain and the visual scene is rife with possible visual matches. Here, we develop a simple paradigm using visual guidance of sound localization to gain insight into how the brain confronts this type of circularity. We tested two competing hypotheses. 1: The brain guides sound-location learning based on the synchrony or simultaneity of auditory-visual stimuli, potentially involving a Hebbian associative mechanism. 2: The brain uses a ‘guess and check’ heuristic in which visual feedback obtained after an eye movement to a sound alters future performance, perhaps by recruiting the brain’s reward-related circuitry. We assessed the effects of exposure to visual stimuli spatially mismatched from sounds on performance of an interleaved auditory-only saccade task. We found that when humans and monkeys were provided the visual stimulus asynchronously with the sound, but as feedback to an auditory-guided saccade, they shifted their subsequent auditory-only performance toward the direction of the visual cue by 1.3-1.7 degrees, or 22-28% of the original 6-degree visual-auditory mismatch. In contrast, when the visual stimulus was presented synchronously with the sound but extinguished too quickly to provide this feedback, there was little change in subsequent auditory-only performance. Our results suggest that the outcome of our own actions is vital to localizing sounds correctly. Contrary to previous expectations, visual calibration of auditory space does not appear to require visual-auditory associations based on synchrony/simultaneity.
My next line of research examines how electrical stimulation of the inferior colliculus influences the perception of sounds in a nonhuman primate. The central nucleus of the inferior colliculus is the major ascending relay of auditory information: almost all auditory signals pass through it before reaching the forebrain. It is therefore an ideal structure for understanding the format of the inputs into the forebrain and, by extension, the processing of auditory scenes that occurs in the brainstem, which makes it an attractive target for studying stimulus integration in the ascending auditory pathway.
Moreover, understanding the relationship between the auditory selectivity of neurons and their contribution to perception is critical to the design of effective auditory brain prosthetics. These prosthetics seek to mimic natural activity patterns to achieve desired perceptual outcomes. We measured the contribution of inferior colliculus (IC) sites to perception using combined recording and electrical stimulation. Monkeys performed a frequency-based discrimination task, reporting whether a probe sound was higher or lower in frequency than a reference sound. Stimulation pulses were paired with the probe sound on 50% of trials (0.5-80 µA, 100-300 Hz, n=172 IC locations in 3 rhesus monkeys). Electrical stimulation tended to bias the animals’ judgments in a fashion that was coarsely but significantly correlated with the best frequency of the stimulation site in comparison to the reference frequency employed in the task. Although there was considerable variability in the effects of stimulation (including impairments in performance and shifts in performance away from the direction predicted based on the site’s response properties), the results indicate that stimulation of the IC can evoke percepts correlated with the frequency tuning properties of the IC. Consistent with the implications of recent human studies, the main avenue for improvement for the auditory midbrain implant suggested by our findings is to increase the number and spatial extent of electrodes, to increase the size of the region that can be electrically activated and provide a greater range of evoked percepts.
My next line of research employs a frequency-tagging approach to examine the extent to which multiple sound sources are combined (or segregated) in the nonhuman primate inferior colliculus. In the single-sound case, most inferior colliculus neurons respond and entrain to sounds in a very broad region of space, and many are entirely spatially insensitive, so it is unknown how the neurons will respond to a situation with more than one sound. I use multiple AM stimuli of different frequencies, which the inferior colliculus represents using a spike timing code. This allows me to measure spike timing in the inferior colliculus to determine which sound source is responsible for neural activity in an auditory scene containing multiple sounds. Using this approach, I find that the same neurons that are tuned to broad regions of space in the single sound condition become dramatically more selective in the dual sound condition, preferentially entraining spikes to stimuli from a smaller region of space. I will examine the possibility that there may be a conceptual linkage between this finding and the finding of receptive field shifts in the visual system.
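As a point of reference for the frequency-tagging logic described above, the following sketch shows one common way to quantify how strongly a spike train entrains to an amplitude-modulation "tag" frequency (vector strength). It is not the dissertation's analysis code; the spike times and modulation rates are invented for the example.

```python
# Hedged illustration: attribute spikes to one of two AM sounds by phase-locking strength.
import numpy as np

def vector_strength(spike_times, mod_freq):
    phases = 2 * np.pi * mod_freq * spike_times            # phase of each spike within the AM cycle
    return np.abs(np.mean(np.exp(1j * phases)))            # 1 = perfect entrainment, 0 = none

rng = np.random.default_rng(2)
f_a, f_b = 20.0, 28.0                                      # hypothetical AM tag frequencies, Hz
locked = (np.arange(40) / f_a) + rng.normal(0, 0.002, 40)  # spikes loosely locked to sound A
background = rng.uniform(0, 2.0, 20)                       # untagged background spikes
spikes = np.sort(np.concatenate([locked, background]))

print("VS to sound A:", round(vector_strength(spikes, f_a), 2))
print("VS to sound B:", round(vector_strength(spikes, f_b), 2))
```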
In chapter 5, I will comment on these findings more generally, compare them to existing theoretical models, and discuss what these results tell us about processing in the central nervous system in a multi-stimulus situation. My results suggest that the brain is flexible in its processing and can adapt its integration schema to fit the available cues and the demands of the task.
Abstract:
Changes in retailing over the last half century have had a detrimental effect on the UK’s local high streets. The recession of 2008-2012 exacerbated these trends leading to a high number of vacancies and neglected properties. The impact was sufficiently severe for the term ‘crisis’ to be used in connection with the British high street. In the academic and commercial reports generated by the recognition that the high street needed to adapt to changing circumstances, a view emerged that the leisure component of high street activity would gain in importance. This article reviews the relationship of the evening and nighttime economy to the high street and considers its potential in reinventing the vitality that is normally associated with these mixed-use urban corridors. The article argues that there is hope in the high street offering a different type of experience to the mainstream forms of entertainment that are consolidating in major town and city centres. It concludes by suggesting that for this to be successful, some public support is necessary.
Abstract:
One of a number of published commentaries contributing to the mid-eighteenth century debate concerning the nature of literary property. The author of An Enquiry sought to repudiate the concept of a natural authorial property right existing at common law. In so doing, he specifically engaged with various aspects of William Warburton's earlier commentary (see: uk_1747), as well as presenting arguments that drew upon the nature of property in general, the differences between the right claimed by proponents of the common law right and other acknowledged incorporeal properties, the similarities between patents and copyright, the history of literary property, the experience of other jurisdictions (drawing upon Venice in particular), and the consequences that would follow from conceding the existence of a perpetual right both for authors in particular and society in general. This commentary, in turn, drew its own response in the guise of A Vindication of the Exclusive Rights of Authors, to their own work (1762).
Abstract:
Green composites are an important class of biocomposites, widely explored due to their enhanced properties. A biodegradable polymeric matrix is reinforced with natural fibers to form a composite that is eco-friendly and environmentally sustainable. Green composites have the potential to replace traditional petroleum-based composites, which are toxic and nonbiodegradable, and to substitute for traditional materials such as steel and wood with biodegradable polymer composites. Degradable and environment-friendly green composites have been prepared by various fabrication techniques, and the properties of different fibers as reinforcement for fully biodegradable and environmentally friendly green composites have been studied.
Abstract:
The building sector requires the worldwide production of 4 billion tonnes of cement annually, consuming more than 40% of global energy and accounting for about 8% of total CO2 emissions. The SUS-CON project aimed at integrating waste materials in the production cycle of concrete, for both ready-mixed and pre-cast applications, resulting in an innovative lightweight, eco-compatible and cost-effective construction material, made entirely from waste materials and characterized by enhanced thermal insulation performance and low embodied energy and CO2. Alkali-activated "cementless" binders, which have recently emerged as eco-friendly construction materials, were used in conjunction with lightweight recycled aggregates to produce sustainable concrete for a range of applications. This paper presents some results from the development of a concrete made with a geopolymeric binder (alkali-activated fly ash) and aggregate from recycled mixed plastic. Mix optimisation was achieved through an extensive investigation of production parameters for binder and aggregate. The mix recipe was developed to achieve the required fresh and hardened properties. The optimised mix gave a compressive strength of about 7 MPa, a flexural strength of about 1.3 MPa and a thermal conductivity of 0.34 W/mK. Fresh and hardened properties were deemed suitable for the industrial production of precast products. Precast panels were designed and produced for the construction of demonstration buildings. Mock-ups of about 2.5 x 2.5 x 2.5 m were built at a demo park in Spain with both SUS-CON and Portland cement concrete, monitoring internal and external temperatures. Field results indicate that the SUS-CON mock-ups have better insulation: during the warmest period of the day, the measured temperature in the SUS-CON mock-ups was lower.
Abstract:
Digital Image Processing is a rapidly evolving field with growing applications in Science and Engineering. It involves changing the nature of an image in order to either improve its pictorial information for human interpretation or render it more suitable for autonomous machine perception. One of the major areas of image processing for human vision applications is image enhancement. The principal goal of image enhancement is to improve the visual quality of an image, typically by taking advantage of the response of the human visual system. Image enhancement methods are usually carried out in the pixel domain. Transform domain methods can often provide another way to interpret and understand image contents. A suitable transform, thus selected, should have low computational complexity. A sequency-ordered arrangement of unique MRT (Mapped Real Transform) coefficients can give rise to an integer-to-integer transform, named Sequency-based unique MRT (SMRT), suitable for image processing applications. The development of the SMRT from the UMRT (Unique MRT), forward and inverse SMRT algorithms, and the basis functions are introduced. A few properties of the SMRT are explored and its scope in lossless text compression is presented.
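The SMRT algorithm itself is not specified in this abstract. As a conceptual reference only, the sketch below shows the standard sequency (Walsh) ordering applied to the Hadamard transform, another integer-to-integer transform in which coefficients are arranged by the number of sign changes, which is the notion of "sequency" referred to above.

```python
# Sequency ordering illustrated on the Hadamard transform (not the SMRT).
import numpy as np

def hadamard(n):
    """Naturally ordered (Sylvester) Hadamard matrix, n a power of two."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def sequency_order(H):
    """Reorder rows by sequency, i.e. by the number of sign changes in each row."""
    changes = [(np.diff(np.sign(row)) != 0).sum() for row in H]
    return H[np.argsort(changes)]

x = np.array([4, 2, 2, 0, 0, 2, 2, 4])
W = sequency_order(hadamard(8))
coeffs = W @ x                  # forward transform (integer-valued coefficients)
x_back = (W.T @ coeffs) // 8    # exact inverse, since W W^T = 8 I for an 8-point transform
```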
Abstract:
This study explores the potential of simvastatin to ameliorate inflammation and infection in open infected skin wounds of rats. Methods: Fourteen Wistar rats weighing 285±12 g were used. The study was done in a group whose open infected skin wounds were treated with topical application of a simvastatin microemulsion (SIM, n=7) and a second group whose wounds were treated with 0.9% saline (SAL, n=7). A bacteriological exam of the wound fluid for gram-positive and gram-negative bacteria, the tissue expression of TNF-α and IL-1β by immunohistochemical technique, and histological analysis by HE staining were performed. Results: The expression of TNF-α was clearly demonstrated at a lower level in skin wounds treated with simvastatin (668.6 ± 74.7 µm²) than in those treated with saline (2120.0 ± 327.1 µm²). In comparison, wound tissue from the SIM group displayed leukocyte infiltration significantly lower than that observed in the SAL group (p<0.05). Culture results of the samples taken from wound fluid on the fourth post-treatment day revealed wound infection in only one rat of the simvastatin group (SIM), in which Proteus mirabilis, Escherichia coli and Enterobacter sp. were isolated. In the rats whose wounds were treated with saline (SAL), polymicrobial infection with more than 100,000 CFU/g was detected in all the wounds. Conclusion: In addition to its anti-inflammatory properties, simvastatin applied to infected open skin wounds is able to reduce infection and probably has antibacterial action. The potential to treat these wounds with statins to ameliorate inflammation and infection is promising.
Abstract:
[Excerpt] The Editorial Team is proud to release this 2016 14th Annual Volume of the Cornell Real Estate Review. This year’s issue explores a wide range of topics, including the deployment of new technologies in multifamily properties, the effects of autonomous vehicles on real estate, and the continued ramifications of the housing crisis through the legal tactics of certain mortgage lenders. Also included is a recent repositioning project: the unique turnaround of a former casino hotel property in Reno, Nevada. Furthermore, this release includes a discussion of value-added multifamily investment strategy, an analysis of the impact of rapid transit on the residential market in Hudson County, New Jersey, and a summary of federal affordable housing incentive programs in the United States. This year’s Pathways features an interview with Toll Brothers Division President Karl Mistry (Baker ’04), and the Baker Viewpoint piece explores the concept of curtailment mortgages.
Abstract:
We study the impact of S&P index membership on REIT stock returns. Given the hybrid nature of REITs, their returns may become more like those of other indexed stocks and less like those of their underlying properties. The existing literature does not offer clear predictions on these potential outcomes. Taking advantage of the inclusion of REITs in major S&P indexes starting in 2001, we find that shared index membership significantly increases the correlation between REIT returns after controlling for the stock characteristics that determine index membership. We also document that index membership enhances the link between REIT stock returns and the performance of the underlying real estate, consistent with improved pricing efficiency. REIT investors appear to be able to enjoy the benefits of improved visibility and liquidity associated with index membership as well as the exposure to underlying real estate markets and the related benefits of diversification.
Abstract:
This paper discusses areas for future research opportunities by addressing accounting issues faced by management accountants practicing in hospitality organizations. Specifically, the article focuses on the use of the uniform system of accounts by operating properties, the usefulness of allocating support costs to operated departments, extending our understanding of operating costs and performance measurement systems and the certification of practicing accountants.
Abstract:
The aim of this work was to study the convective drying of anchovy (Engraulis anchoita) fillets and to evaluate the final product characteristics through its biochemical and functional properties. The drying temperatures were 50, 60 and 70°C, and the fillet samples were dried with the skin down (with air flow on one or both sides) and with the skin up (with air flow on one side). The drying experimental data were analyzed with the Henderson–Pabis model, which showed a good fit (R² > 0.99 and RMSE < 0.05). The moisture effective diffusivity values ranged from 4.1 × 10⁻¹⁰ to 8.6 × 10⁻¹⁰ m² s⁻¹ with the skin down and from 2.2 × 10⁻¹⁰ to 5.5 × 10⁻¹⁰ m² s⁻¹ with the skin up, and the activation energy values were 32.2 and 38.4 kJ mol⁻¹, respectively. The product characteristics were significantly affected (p < 0.05) by the drying operation conditions. The smallest change occurred when drying at 60°C with air flow on both sides of the samples and the skin up. In this condition, the product showed solubility of 22.3%; in vitro digestibility of 87.4%; available lysine and methionine contents of 7.21 and 2.64 g 100 g⁻¹, respectively; TBA value of 1.16 mg MDA kg⁻¹; specific antioxidant activity of 1.91 mM DPPH g⁻¹ min⁻¹; and total color variation of 10.72.
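The Henderson–Pabis thin-layer model named above has the form MR(t) = a·exp(−k·t). The sketch below fits it to a hypothetical moisture-ratio curve; the data points are placeholders, and the diffusivity and activation-energy steps (Arrhenius analysis of the fitted k values) are not shown.

```python
# Minimal Henderson–Pabis fit with made-up drying data.
import numpy as np
from scipy.optimize import curve_fit

def henderson_pabis(t, a, k):
    return a * np.exp(-k * t)

t = np.array([0, 30, 60, 120, 180, 240, 300], dtype=float)   # drying time, min (hypothetical)
MR = np.array([1.00, 0.78, 0.60, 0.37, 0.23, 0.14, 0.09])    # moisture ratio (hypothetical)

(a, k), _ = curve_fit(henderson_pabis, t, MR, p0=(1.0, 0.01))
residuals = MR - henderson_pabis(t, a, k)
r2 = 1 - np.sum(residuals**2) / np.sum((MR - MR.mean())**2)
rmse = np.sqrt(np.mean(residuals**2))
print(f"a = {a:.3f}, k = {k:.4f} 1/min, R2 = {r2:.3f}, RMSE = {rmse:.3f}")
```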
Abstract:
Understanding how biodiversity is spatially distributed over both the short term and the long term, and what factors affect that distribution, is critical for modeling the spatial pattern of biodiversity as well as for promoting effective conservation planning and practices. This dissertation aims to examine factors that influence short-term and long-term avian distribution from the geographical sciences perspective. The research develops landscape-level habitat metrics to characterize forest height heterogeneity and examines their efficacy in modelling avian richness at the continental scale. Two types of novel vegetation-height-structured habitat metrics are created based on second-order texture algorithms and the concepts of patch-based habitat metrics. I correlate the height-structured metrics with the richness of different forest guilds, and also examine their efficacy in multivariate richness models. The results suggest that height heterogeneity, beyond canopy height alone, supplements habitat characterization and richness models of two forest bird guilds. The metrics and models derived in this study demonstrate practical examples of utilizing three-dimensional vegetation data for improved characterization of spatial patterns in species richness. The second and third projects focus on analyzing centroids of avian distributions and testing hypotheses regarding the direction and speed of distribution shifts. I first showcase the usefulness of centroid analysis for characterizing the distribution changes of a few case-study species. Applying the centroid method to 57 permanent-resident bird species, I show that multi-directional distribution shifts occurred in a large number of the studied species. I also demonstrate that plains birds are not shifting their distributions faster than mountain birds, contrary to the prediction based on the climate-change velocity hypothesis. By modelling the abundance change rate at the regional level, I show that extreme climate events and precipitation measures associate closely with some of the long-term distribution shifts. This dissertation improves our understanding of bird habitat characterization for species richness modelling and expands our knowledge of how avian populations shifted their ranges in North America in response to changing environments over the past four decades. The results provide an important scientific foundation for more accurate predictive species distribution modeling in the future.
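The centroid analyses described above rest on a simple quantity: an abundance-weighted distribution centroid computed per time period, with the shift vector taken between periods. The sketch below illustrates this with invented coordinates and abundances; it is not the dissertation's analysis code.

```python
# Abundance-weighted centroid and shift vector (hypothetical data).
import numpy as np

def centroid(lon, lat, abundance):
    w = abundance / abundance.sum()
    return np.array([np.sum(w * lon), np.sum(w * lat)])

lon = np.array([-100.0, -98.5, -97.0, -95.5])
lat = np.array([35.0, 36.0, 37.0, 38.0])
abund_1970s = np.array([40.0, 30.0, 20.0, 10.0])   # early-period abundances (made up)
abund_2010s = np.array([10.0, 20.0, 30.0, 40.0])   # late-period abundances (made up)

c0, c1 = centroid(lon, lat, abund_1970s), centroid(lon, lat, abund_2010s)
shift = c1 - c0                                    # degrees east, degrees north
print("centroid shift (dlon, dlat):", shift.round(2))
```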
Abstract:
Scheduling problems are generally NP-hard combinatorial problems, and a lot of research has been done to solve these problems heuristically. However, most of the previous approaches are problem-specific, and research into the development of a general scheduling algorithm is still in its infancy. Mimicking the natural evolutionary process of the survival of the fittest, Genetic Algorithms (GAs) have attracted much attention in solving difficult scheduling problems in recent years. Some obstacles exist when using GAs: there is no canonical mechanism to deal with constraints, which are commonly met in most real-world scheduling problems, and small changes to a solution are difficult. To overcome both difficulties, indirect approaches have been presented (in [1] and [2]) for nurse scheduling and driver scheduling, where GAs are used by mapping the solution space, and separate decoding routines then build solutions to the original problem. In our previous indirect GAs, learning is implicit and is restricted to the efficient adjustment of weights for a set of rules that are used to construct schedules. The major limitation of those approaches is that they learn in a non-human way: like most existing construction algorithms, once the best weight combination is found, the rules used in the construction process are fixed at each iteration. However, normally a long sequence of moves is needed to construct a schedule, and using fixed rules at each move is thus unreasonable and not coherent with human learning processes. When a human scheduler is working, he normally builds a schedule step by step following a set of rules. After much practice, the scheduler gradually masters the knowledge of which solution parts go well with others. He can identify good parts and is aware of the solution quality even if the scheduling process is not yet complete, and thus has the ability to finish a schedule by using flexible, rather than fixed, rules. In this research we intend to design more human-like scheduling algorithms by using ideas derived from Bayesian Optimization Algorithms (BOA) and Learning Classifier Systems (LCS) to implement explicit learning from past solutions. BOA can be applied to learn to identify good partial solutions and to complete them by building a Bayesian network of the joint distribution of solutions [3]. A Bayesian network is a directed acyclic graph with each node corresponding to one variable, and each variable corresponding to an individual rule by which a schedule will be constructed step by step. The conditional probabilities are computed according to an initial set of promising solutions. Subsequently, each new instance for each node is generated by using the corresponding conditional probabilities, until values for all nodes have been generated. Another set of rule strings will be generated in this way, some of which will replace previous strings based on fitness selection. If stopping conditions are not met, the Bayesian network is updated again using the current set of good rule strings. The algorithm thereby tries to explicitly identify and mix promising building blocks. It should be noted that for most scheduling problems the structure of the network model is known and all the variables are fully observed. In this case, the goal of learning is to find the rule values that maximize the likelihood of the training data, so learning can amount to 'counting' in the case of multinomial distributions.
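A simplified sketch of this sample-and-count loop follows. It is not the authors' algorithm: for brevity it treats the nodes as independent (a univariate model in the spirit of UMDA/PBIL rather than a full Bayesian network), and the fitness function, population size and rule counts are stand-in assumptions.

```python
# Simplified estimation-of-distribution loop over rule strings.
import numpy as np

rng = np.random.default_rng(0)
n_steps, n_rules, pop_size = 10, 4, 50          # schedule length, rules per step, population size

def fitness(rule_string):
    """Placeholder objective: real use would decode the rules into a schedule and score it."""
    return -np.sum((rule_string - 2) ** 2)

probs = np.full((n_steps, n_rules), 1.0 / n_rules)   # initial uniform model
for generation in range(30):
    pop = np.array([[rng.choice(n_rules, p=probs[s]) for s in range(n_steps)]
                    for _ in range(pop_size)])
    scores = np.array([fitness(ind) for ind in pop])
    best = pop[np.argsort(scores)[-pop_size // 4:]]  # keep the promising rule strings
    for s in range(n_steps):                         # "counting" step: multinomial MLE per node
        counts = np.bincount(best[:, s], minlength=n_rules) + 1   # Laplace smoothing
        probs[s] = counts / counts.sum()

print("most probable rule at each construction step:", probs.argmax(axis=1))
```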
In the LCS approach, each rule has a strength showing its current usefulness in the system, and this strength is constantly assessed [4]. To implement sophisticated learning based on previous solutions, an improved LCS-based algorithm is designed, which consists of the following three steps. The initialization step is to assign each rule at each stage a constant initial strength. Rules are then selected using the Roulette Wheel strategy. The next step is to reinforce the strengths of the rules used in the previous solution, keeping the strength of unused rules unchanged. The selection step is to select fitter rules for the next generation. It is envisaged that the LCS part of the algorithm will be used as a hill climber for the BOA algorithm. This is exciting and ambitious research, which might provide the stepping-stone for a new class of scheduling algorithms. Data sets from nurse scheduling and mall problems will be used as test-beds. It is envisaged that once the concept has been proven successful, it will be implemented into general scheduling algorithms. It is also hoped that this research will give some preliminary answers about how to include human-like learning into scheduling algorithms and may therefore be of interest to researchers and practitioners in the areas of scheduling and evolutionary computation.
References
1. Aickelin, U. and Dowsland, K. (2003) 'Indirect Genetic Algorithm for a Nurse Scheduling Problem', Computers & Operations Research (in print).
2. Li, J. and Kwan, R.S.K. (2003) 'Fuzzy Genetic Algorithm for Driver Scheduling', European Journal of Operational Research 147(2): 334-344.
3. Pelikan, M., Goldberg, D. and Cantu-Paz, E. (1999) 'BOA: The Bayesian Optimization Algorithm', IlliGAL Report No. 99003, University of Illinois.
4. Wilson, S. (1994) 'ZCS: A Zeroth-level Classifier System', Evolutionary Computation 2(1): 1-18.
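Complementing the sketch given after the BOA paragraph, the following minimal sketch illustrates the LCS-style bookkeeping described before the references: constant initial strengths per rule and stage, roulette-wheel selection, and reinforcement of the rules used in a good solution while unused rules keep their strength. The evaluation function and the reinforcement increment are placeholder assumptions, not the authors' design.

```python
# Rule-strength learning with roulette-wheel selection (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
n_stages, n_rules = 10, 4
strength = np.ones((n_stages, n_rules))          # constant initial strength for every rule

def roulette_pick(weights):
    return rng.choice(len(weights), p=weights / weights.sum())

def evaluate(rule_choices):
    """Placeholder quality measure of the schedule built from these rule choices."""
    return -np.sum((np.array(rule_choices) - 1) ** 2)

best_quality = -np.inf
for iteration in range(200):
    choices = [roulette_pick(strength[s]) for s in range(n_stages)]  # one rule per stage
    quality = evaluate(choices)
    if quality >= best_quality:                  # reinforce rules used in a good solution
        best_quality = quality
        for s, r in enumerate(choices):
            strength[s, r] += 1.0                # unused rules keep their strength
print("strongest rule per stage:", strength.argmax(axis=1))
```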