948 results for segmentation and reverberation


Relevance: 80.00%

Abstract:

A neural network model of synchronized oscillator activity in visual cortex is presented in order to account for recent neurophysiological findings that such synchronization may reflect global properties of the stimulus. In these recent experiments, it was reported that synchronization of oscillatory firing responses to moving bar stimuli occurred not only for nearby neurons but also between neurons separated by several cortical columns (several mm of cortex) when these neurons shared some receptive field preferences specific to the stimuli. These results were obtained not only for single bar stimuli but also across two disconnected, but collinear, bars moving in the same direction. Our model and computer simulations reproduce these synchrony results for both single and double bar stimuli. For the double bar case, synchronous oscillations are induced in the region between the bars, but no oscillations are induced in the regions beyond the stimuli. These results were achieved with cellular units that exhibit limit cycle oscillations for a robust range of input values, but which approach an equilibrium state when undriven. Single and double bar synchronization of these oscillators was achieved by different, but formally related, models of preattentive visual boundary segmentation and attentive visual object recognition, as well as nearest-neighbor and randomly coupled models. In preattentive visual segmentation, synchronous oscillations may reflect the binding of local feature detectors into a globally coherent grouping. In object recognition, synchronous oscillations may occur during an attentive resonant state that triggers new learning. These modelling results support earlier theoretical predictions of synchronous visual cortical oscillations and demonstrate the robustness of the mechanisms capable of generating synchrony.
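For intuition only, a minimal phase-oscillator sketch in Python (a generic Kuramoto-style ring with nearest-neighbor coupling, not the authors' cortical model) shows how driven units can synchronize while undriven units stay quiescent; the two driven blocks stand in for the two collinear bars:

```python
import numpy as np

# Minimal Kuramoto-style sketch (an assumption, not the paper's model):
# N phase oscillators on a ring, nearest-neighbor coupling, only the
# "stimulated" units have a nonzero natural frequency.
rng = np.random.default_rng(0)
N, steps, dt, K = 64, 2000, 0.01, 4.0
driven = np.zeros(N, bool)
driven[16:24] = driven[40:48] = True      # two "bars" of stimulated units

omega = np.where(driven, 2 * np.pi * 1.0, 0.0)  # undriven units rest
theta = rng.uniform(0, 2 * np.pi, N)

for _ in range(steps):
    left = np.roll(theta, 1)
    right = np.roll(theta, -1)
    coupling = np.sin(left - theta) + np.sin(right - theta)
    theta += dt * (omega + K * coupling)

# Phase coherence of the driven units; values near 1 indicate synchrony
# across both "bars".
r = abs(np.exp(1j * theta[driven]).mean())
print(f"order parameter of driven units: {r:.2f}")
```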

Relevance: 80.00%

Abstract:

Recent years have witnessed a rapid growth in the demand for streaming video over the Internet, exposing challenges in coping with heterogeneous device capabilities and varying network throughput. When we couple this rise in streaming with the growing number of portable devices (smartphones, tablets, laptops) we see an ever-increasing demand for high-definition video online while on the move. Wireless networks are inherently characterised by restricted shared bandwidth and relatively high error rates, thus presenting a challenge for the efficient delivery of high quality video. Additionally, mobile devices can support/demand a range of video resolutions and qualities. This demand for mobile streaming highlights the need for adaptive video streaming schemes that can adjust to available bandwidth and device heterogeneity, and can provide graceful changes in video quality while preserving viewer satisfaction. In this context, the use of well-known scalable media streaming techniques, commonly known as scalable coding, is an attractive solution and the focus of this thesis. In this thesis we investigate the transmission of existing scalable video models over a lossy network and determine how the variation in viewable quality is affected by packet loss. This work focuses on leveraging the benefits of scalable media while reducing the effects of data loss on achievable video quality. The overall approach is focused on the strategic packetisation of the underlying scalable video and how best to utilise error resiliency to maximise viewable quality. In particular, we examine the manner in which scalable video is packetised for transmission over lossy networks and propose new techniques that reduce the impact of packet loss on scalable video by selectively choosing how to packetise the data and which data to transmit. We also exploit redundancy techniques, such as error resiliency, to enhance the stream quality by ensuring a smooth play-out with fewer changes in achievable video quality. The contributions of this thesis are in the creation of new segmentation and encapsulation techniques which increase the viewable quality of existing scalable models by fragmenting and re-allocating the video sub-streams based on user requirements, available bandwidth and variations in loss rates. We offer new packetisation techniques which reduce the effects of packet loss on viewable quality by leveraging the increase in the number of frames per group of pictures (GOP) and by providing equality of data in every packet transmitted per GOP. These provide novel mechanisms for packetisation and error resiliency, as well as new applications for existing techniques such as interleaving and Priority Encoded Transmission. We also introduce three new scalable coding models, which offer a balance between transmission cost and the consistency of viewable quality.
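As a hedged illustration of the "equality of data in every packet per GOP" idea, the sketch below round-robins each frame's bytes across the packets of a GOP, so one lost packet removes a small, equal share of every frame rather than a whole frame; the helper name and toy GOP are hypothetical, not the thesis' scheme:

```python
from typing import List

def interleave_gop(frames: List[bytes], n_packets: int) -> List[bytearray]:
    """Spread each frame's bytes round-robin across n_packets so a single
    lost packet costs a small, roughly equal share of every frame in the
    GOP (illustrative sketch; real schemes add headers, FEC, etc.)."""
    packets = [bytearray() for _ in range(n_packets)]
    for frame in frames:
        for i, byte in enumerate(frame):
            packets[i % n_packets].append(byte)
    return packets

# Toy GOP: 8 frames of unequal size.
gop = [bytes([f]) * (100 + 10 * f) for f in range(8)]
pkts = interleave_gop(gop, n_packets=10)
print([len(p) for p in pkts])  # roughly equal payloads per packet
```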

Relevance: 80.00%

Abstract:

BACKGROUND: Biological processes occur on a vast range of time scales, and many of them occur concurrently. As a result, system-wide measurements of gene expression have the potential to capture many of these processes simultaneously. The challenge, however, is to separate these processes and time scales in the data. In many cases the number of processes and their time scales are unknown. This issue is particularly relevant to developmental biologists, who are interested in processes such as growth, segmentation and differentiation, which can all take place simultaneously, but on different time scales. RESULTS: We introduce a flexible and statistically rigorous method for detecting different time scales in time-series gene expression data, by identifying expression patterns that are temporally shifted between replicate datasets. We apply our approach to a Saccharomyces cerevisiae cell-cycle dataset and an Arabidopsis thaliana root developmental dataset. In both datasets our method successfully detects processes operating on several different time scales. Furthermore we show that many of these time scales can be associated with particular biological functions. CONCLUSIONS: The spatiotemporal modules identified by our method suggest the presence of multiple biological processes, acting at distinct time scales in both the Arabidopsis root and yeast. Using similar large-scale expression datasets, the identification of biological processes acting at multiple time scales in many organisms is now possible.
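A minimal sketch of the core signal-processing idea, detecting a temporal shift between replicate expression profiles via normalized cross-correlation (an illustrative stand-in, not the authors' statistical method):

```python
import numpy as np

def estimate_delay(a: np.ndarray, b: np.ndarray) -> int:
    """Estimate how many samples replicate b lags replicate a, via
    normalized cross-correlation (positive = b is delayed)."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    xc = np.correlate(a, b, mode="full")
    return (len(b) - 1) - int(np.argmax(xc))

t = np.linspace(0, 4 * np.pi, 200)
rep1 = np.sin(t)             # a cell-cycle-like expression profile
rep2 = np.sin(t - 0.5)       # same process, temporally shifted replicate
print(estimate_delay(rep1, rep2))  # ~8 samples for this grid spacing
```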

Relevance: 80.00%

Abstract:

The early detection of developmental disorders is key to improving child outcomes, allowing interventions to be initiated that promote development and improve prognosis. Research on autism spectrum disorder (ASD) suggests that behavioral markers can be observed late in the first year of life. Many of these studies involved extensive frame-by-frame video observation and analysis of a child's natural behavior. Although non-intrusive, these methods are extremely time-intensive and require a high level of observer training; thus, they are impractical for clinical and large population research purposes. Diagnostic measures for ASD are available for infants but are only accurate when used by specialists experienced in early diagnosis. This work is a first milestone in a long-term multidisciplinary project that aims to help clinicians and general practitioners accomplish this early detection/measurement task automatically. We focus on providing computer vision tools to measure and identify ASD behavioral markers based on components of the Autism Observation Scale for Infants (AOSI). In particular, we develop algorithms to measure three critical AOSI activities that assess visual attention. We augment these AOSI activities with an additional test that analyzes asymmetrical patterns in unsupported gait. The first set of algorithms assesses head motion by tracking facial features, while the gait analysis relies on joint foreground segmentation and 2D body pose estimation in video. We show results that provide insightful knowledge to augment the clinician's behavioral observations obtained from real in-clinic assessments.
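As a hedged sketch of the head-motion measurement, the snippet below tracks features frame to frame with OpenCV's pyramidal Lucas-Kanade optical flow and reports mean displacement; the input file is hypothetical and this is a generic tracker, not the paper's algorithm:

```python
import cv2
import numpy as np

# Generic feature-tracking sketch ("video.mp4" is a hypothetical input).
cap = cv2.VideoCapture("video.mp4")
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=50,
                              qualityLevel=0.01, minDistance=7)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    if nxt is None or not (status == 1).any():
        break
    good_new, good_old = nxt[status == 1], pts[status == 1]
    # Mean per-frame displacement as a crude head-motion proxy.
    motion = np.linalg.norm(good_new - good_old, axis=1).mean()
    print(f"mean feature displacement: {motion:.2f} px")
    prev_gray, pts = gray, good_new.reshape(-1, 1, 2)
cap.release()
```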

Relevance: 80.00%

Abstract:

Within industrial automation systems, three-dimensional (3-D) vision provides very useful feedback information in the autonomous operation of various manufacturing equipment (e.g., industrial robots, material handling devices, assembly systems, and machine tools). The hardware performance of contemporary 3-D scanning devices is suitable for online utilization. However, the bottleneck is the lack of real-time algorithms for the recognition of geometric primitives (e.g., planes and natural quadrics) from a scanned point cloud. One of the most important and most frequent geometric primitives in various engineering tasks is the plane. In this paper, we propose a new fast one-pass algorithm for the recognition (segmentation and fitting) of planar segments from a point cloud. To effectively segment planar regions, we exploit the orthonormality of certain wavelets to polynomial functions, as well as their sensitivity to abrupt changes. After segmentation of planar regions, we estimate the parameters of the corresponding planes using standard fitting procedures. For point cloud structuring, a z-buffer algorithm with mesh triangle representation in barycentric coordinates is employed. The proposed recognition method is tested and experimentally validated in several real-world case studies.
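The wavelet-based segmentation is beyond a short example, but the subsequent fitting step is standard; a minimal least-squares plane fit to a segmented region via SVD:

```python
import numpy as np

def fit_plane(points: np.ndarray):
    """Least-squares plane through an Nx3 segment of a point cloud.
    Returns (unit normal, point on plane); the standard fitting step
    applied after planar regions have been segmented."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]              # direction of least variance
    return normal, centroid

rng = np.random.default_rng(1)
xy = rng.uniform(-1, 1, (500, 2))
z = 0.3 * xy[:, 0] - 0.2 * xy[:, 1] + 0.01 * rng.standard_normal(500)
cloud = np.column_stack([xy, z])
n, c = fit_plane(cloud)
print(np.round(n / n[2], 3))     # ~ [-0.3, 0.2, 1] up to scale
```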

Relevance: 80.00%

Abstract:

Histopathology is the clinical standard for tissue diagnosis. However, it has several limitations: it requires tissue processing, which can take 30 minutes or more, and it requires a highly trained pathologist to diagnose the tissue. Additionally, the diagnosis is qualitative, and the lack of quantitation can lead to observer-specific diagnoses. Taken together, these factors make it difficult to diagnose tissue at the point of care using histopathology.

Several clinical situations could benefit from more rapid and automated histological processing, which could reduce the time and the number of steps required between obtaining a fresh tissue specimen and rendering a diagnosis. For example, there is a need for rapid detection of residual cancer on the surface of tumor resection specimens during excisional surgeries, known as intraoperative tumor margin assessment. Additionally, rapid assessment of biopsy specimens at the point of care could enable clinicians to confirm that a suspicious lesion has been successfully sampled, thus preventing an unnecessary repeat biopsy procedure. Rapid and low-cost histological processing could also be useful in settings lacking the human resources and equipment necessary to perform standard histologic assessment. Lastly, automated interpretation of tissue samples could reduce inter-observer error, particularly in the diagnosis of borderline lesions.

To address these needs, high quality microscopic images of the tissue must be obtained in rapid timeframes in order for a pathologic assessment to be useful for guiding the intervention. Optical microscopy is a powerful technique for obtaining high-resolution images of tissue morphology in real time at the point of care, without the need for tissue processing. In particular, a number of groups have combined fluorescence microscopy with vital fluorescent stains to visualize micro-anatomical features of thick (i.e., unsectioned or unprocessed) tissue. However, robust methods for segmentation and quantitative analysis of heterogeneous images are essential to enable automated diagnosis. Thus, the goal of this work was to obtain high-resolution images of tissue morphology by employing fluorescence microscopy and vital fluorescent stains, and to develop a quantitative strategy to segment and quantify tissue features in heterogeneous images, such as nuclei and the surrounding stroma, thereby enabling automated diagnosis of thick tissues.

To achieve these goals, three specific aims were proposed. The first aim was to develop an image processing method that can differentiate nuclei from background tissue heterogeneity and enable automated diagnosis of thick tissue at the point of care. A computational technique called sparse component analysis (SCA) was adapted to isolate features of interest, such as nuclei, from the background. SCA has been used previously in the image processing community for image compression, enhancement, and restoration, but had never been applied to separate distinct tissue types in a heterogeneous image. In combination with a high resolution fluorescence microendoscope (HRME) and the contrast agent acriflavine, the utility of this technique was demonstrated through imaging preclinical sarcoma tumor margins. Acriflavine localizes to the nuclei of cells, where it reversibly associates with RNA and DNA. Additionally, acriflavine shows some affinity for collagen and muscle. SCA was adapted to isolate acriflavine positive features, or APFs (which correspond to RNA and DNA), from background tissue heterogeneity. The circle transform (CT) was applied to the SCA output to quantify the size and density of overlapping APFs. The sensitivity of the SCA+CT approach to variations in APF size, density and background heterogeneity was demonstrated through simulations. Specifically, SCA+CT achieved the lowest errors for higher contrast ratios and larger APF sizes. When applied to tissue images of excised sarcoma margins, SCA+CT correctly isolated APFs and showed consistently increased density in tumor and tumor + muscle images compared to images containing muscle. Next, variables were quantified from images of resected primary sarcomas and used to optimize a multivariate model. The sensitivity and specificity for differentiating positive from negative ex vivo resected tumor margins were 82% and 75%, respectively. The utility of this approach was further tested by imaging the in vivo tumor cavities of 34 mice after resection of a sarcoma, using local recurrence as a benchmark. When applied prospectively to images from the tumor cavity, the sensitivity and specificity for differentiating local recurrence were 78% and 82%, respectively. The results indicate that SCA+CT can accurately delineate APFs in heterogeneous tissue, which is essential to enable automated and rapid surveillance of tissue pathology.
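As a rough stand-in for the circle transform step, the sketch below uses OpenCV's Hough circle transform to count overlapping circular features and estimate their sizes on a synthetic image (illustrative only; not the SCA+CT implementation):

```python
import cv2
import numpy as np

# Synthetic blobs stand in for a binarized SCA output; parameters are
# illustrative, not tuned values.
img = np.zeros((200, 200), np.uint8)
for (x, y, r) in [(60, 60, 8), (70, 66, 9), (140, 120, 7)]:  # overlapping APFs
    cv2.circle(img, (x, y), r, 255, -1)
img = cv2.GaussianBlur(img, (5, 5), 0)

circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1, minDist=6,
                           param1=100, param2=10, minRadius=4, maxRadius=15)
if circles is not None:
    radii = circles[0, :, 2]
    density = len(radii) / img.size          # features per pixel
    print(f"{len(radii)} features, mean radius {radii.mean():.1f} px, "
          f"density {density:.2e} per px")
```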

Two primary challenges were identified in the work in aim 1. First, while SCA can be used to isolate features, such as APFs, from heterogeneous images, its performance is limited by the contrast between APFs and the background. Second, while it is feasible to create mosaics by scanning a sarcoma tumor bed in a mouse, which is on the order of 3-7 mm in any one dimension, it is not feasible to evaluate an entire human surgical margin. Thus, improvements to the microscopic imaging system were made to (1) improve image contrast by rejecting out-of-focus background fluorescence and (2) increase the field of view (FOV) while maintaining the sub-cellular resolution needed for delineation of nuclei. To address these challenges, a technique called structured illumination microscopy (SIM) was employed, in which the entire FOV is illuminated with a defined spatial pattern rather than scanning a focal spot, as in confocal microscopy.

Thus, the second aim was to improve image contrast and increase the FOV by employing wide-field, non-contact structured illumination microscopy, and to optimize the segmentation algorithm for the new imaging modality. Both image contrast and FOV were increased through the development of a wide-field fluorescence SIM system. Clear improvement in image contrast was seen in structured illumination images compared to uniform illumination images. Additionally, the FOV is over 13X larger than that of the fluorescence microendoscope used in aim 1. Initial segmentation results of SIM images revealed that SCA is unable to segment large numbers of APFs in the tumor images. Because the FOV of the SIM system is over 13X larger than the FOV of the fluorescence microendoscope, dense collections of APFs commonly seen in tumor images could no longer be sparsely represented, and the fundamental sparsity assumption underlying SCA was no longer met. Thus, an algorithm called maximally stable extremal regions (MSER) was investigated as an alternative approach for APF segmentation in SIM images. MSER was able to accurately segment large numbers of APFs in SIM images of tumor tissue. In addition to optimizing MSER for SIM image segmentation, an optimal frequency of the illumination pattern used in SIM was carefully selected, because the image signal-to-noise ratio (SNR) depends on the grid frequency. A grid frequency of 31.7 mm⁻¹ led to the highest SNR and the lowest percent error associated with MSER segmentation.
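A minimal MSER sketch (OpenCV's implementation on a synthetic image; the real pipeline would tune delta and the area bounds on SIM data):

```python
import cv2
import numpy as np

# Synthetic bright blobs stand in for APFs in a SIM image.
gray = np.zeros((200, 200), np.uint8)
for (x, y, r) in [(50, 50, 6), (58, 55, 7), (150, 100, 5)]:
    cv2.circle(gray, (x, y), r, 200, -1)

# Default MSER parameters; delta and area bounds can be set via the
# detector's setters when tuning on real data.
mser = cv2.MSER_create()
regions, bboxes = mser.detectRegions(gray)
print(f"{len(regions)} stable regions (candidate APFs)")
```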

Once MSER was optimized for SIM image segmentation and the optimal grid frequency was selected, a quantitative model was developed to diagnose mouse sarcoma tumor margins that were imaged ex vivo with SIM. Tumor margins were stained with acridine orange (AO) in aim 2 because AO was found to stain the sarcoma tissue more brightly than acriflavine. Both acriflavine and AO are intravital dyes, which have been shown to stain nuclei, skeletal muscle, and collagenous stroma. A tissue-type classification model was developed to differentiate localized regions (75x75 µm) of tumor from skeletal muscle and adipose tissue based on the MSER segmentation output. Specifically, a logistic regression model was used to classify each localized region. The logistic regression model yielded an output in terms of the probability (0-100%) that tumor was located within each 75x75 µm region. The model performance was tested using a receiver operating characteristic (ROC) curve analysis, which revealed 77% sensitivity and 81% specificity. For margin classification, the whole margin image was divided into localized regions and this tissue-type classification model was applied. In a subset of 6 margins (3 negative, 3 positive), it was shown that with a tumor probability threshold of 50%, 8% of all regions from negative margins exceeded this threshold, while over 17% of all regions exceeded the threshold in the positive margins. Thus, 8% of regions in negative margins were considered false positives. These false positive regions are likely due to the high density of APFs present in normal tissues, which clearly demonstrates a challenge in implementing this automatic algorithm based on AO staining alone.
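A hedged sketch of the region-level classification and ROC analysis using scikit-learn, with synthetic features standing in for the MSER-derived variables:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# X holds per-region summary variables (e.g. feature density, mean size);
# y marks regions known to contain tumor. Synthetic data for illustration.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0.0, 1.0, (200, 2)),    # non-tumor regions
               rng.normal(1.5, 1.0, (200, 2))])   # tumor regions
y = np.r_[np.zeros(200), np.ones(200)]

model = LogisticRegression().fit(X, y)
p_tumor = model.predict_proba(X)[:, 1]            # 0-1 tumor probability
print(f"AUC: {roc_auc_score(y, p_tumor):.2f}")
flagged = (p_tumor > 0.5).mean()                  # fraction above threshold
print(f"{100 * flagged:.0f}% of regions exceed the 50% threshold")
```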

Thus, the third aim was to improve the specificity of the diagnostic model by leveraging other sources of contrast. Modifications were made to the SIM system to enable fluorescence imaging at a variety of wavelengths. Specifically, the SIM system was modified to enable imaging of red fluorescent protein (RFP)-expressing sarcomas, which were used to delineate the location of tumor cells within each image. Initial analysis of AO-stained panels confirmed that there was room for improvement in tumor detection, particularly in regard to false positive regions that were negative for RFP. One approach to improving the specificity of the diagnostic model was to investigate a fluorophore more specific to tumor. Specifically, tetracycline was selected because it appeared to specifically stain freshly excised tumor tissue in a matter of minutes, and was non-toxic and stable in solution. Results indicated that tetracycline staining shows promise for increasing the specificity of tumor detection in SIM images of a preclinical sarcoma model, and further investigation is warranted.

In conclusion, this work presents the development of a combination of tools that is capable of automated segmentation and quantification of micro-anatomical images of thick tissue. When compared to the fluorescence microendoscope, wide-field multispectral fluorescence SIM imaging provided improved image contrast, a larger FOV with comparable resolution, and the ability to image a variety of fluorophores. MSER was an appropriate and rapid approach to segment dense collections of APFs from wide-field SIM images. Variables that reflect the morphology of the tissue, such as the density, size, and shape of nuclei and nucleoli, can be used to automatically diagnose SIM images. The clinical utility of SIM imaging and MSER segmentation to detect microscopic residual disease has been demonstrated by imaging excised preclinical sarcoma margins. Ultimately, this work demonstrates that fluorescence imaging of tissue micro-anatomy combined with a specialized algorithm for delineation and quantification of features is a means for rapid, non-destructive and automated detection of microscopic disease, which could improve cancer management in a variety of clinical scenarios.

Relevance: 80.00%

Abstract:

One habitat management requirement forced by 21st-century relative sea-level rise (RSLR) will be the need to reassess the dimensions of the long-term transgressive behaviour of coastal systems forced by such RSLR. Fresh approaches to the conceptual modelling, and subsequent implementation, of new coastal and peri-marine habitats will be required. There is concern that existing approaches to forecasting coastal systems development (and by implication their associated scarce coastal habitats) over the next century depend on a premise of orderly spatial succession of habitats. This assumption is shown to be questionable given the possible future rates of RSLR, the magnitude of shoreline retreat and the lack of coastal sediment to maintain the morphologies that protect low-energy coastal habitats. Of these issues, sediment deficiency is regarded as one of the major problems for future habitat development. Examples of contemporary behaviour of UK coasts show evidence of coastal sediment starvation resulting from relatively stable RSLR, anthropogenic sealing of coastal sources, and intercepted coastal sediment pathways, which together force segmentation of coastal systems. From these examples, key principles are deduced which may prejudice the existence of future habitats: accelerated future sediment demand due to RSLR may not be met by supply and, if short- to medium-term hold-the-line policies predominate, long-term strategies for managed realignment and habitat enhancement may prove impossible goals. Methods of contemporary sediment husbandry may help sustain some habitats in place, but otherwise, instead of integrated coastal organization, managers may need to consider coastal breakdown, segmentation and habitat reduction as the basis of 21st-century coastal evolution and planning.

Relevance: 80.00%

Abstract:

Oscillations in network bright points (NBPs) are studied at a variety of chromospheric heights. In particular, the three-dimensional variation of NBP oscillations is studied using image segmentation and cross-correlation analysis between images taken in the light of Ca II K₃, Hα core, Mg I b₂, and Mg I b₁ −0.4 Å. Wavelet analysis is used to isolate wave packets in time and to search for height-dependent time delays that result from upward- or downward-directed traveling waves. In each NBP studied, we find evidence for kink-mode waves (1.3, 1.9 mHz) traveling up through the chromosphere and coupling with sausage-mode waves (2.6, 3.8 mHz). This provides a means for depositing energy in the upper chromosphere. We also find evidence for other upward- and downward-propagating waves in the 1.3-4.6 mHz range. Some oscillations do not correspond to traveling waves, and we attribute these to waves generated in neighboring regions.
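For illustration, a cross-correlation estimate of the travel delay between series at two heights (a simple stand-in for the wavelet-based delay analysis; the cadence and frequency are made up):

```python
import numpy as np

def travel_lag(lower: np.ndarray, upper: np.ndarray, dt: float) -> float:
    """Cross-correlation estimate of the time delay (s) of a wave packet
    seen first in a lower-chromosphere series, later in an upper one."""
    a = (lower - lower.mean()) / lower.std()
    b = (upper - upper.mean()) / upper.std()
    xc = np.correlate(b, a, mode="full")
    lag = int(np.argmax(xc)) - (len(a) - 1)
    return lag * dt

dt = 10.0                                  # cadence, s (illustrative)
t = np.arange(0, 3000, dt)
f = 2.6e-3                                 # a sausage-mode-like 2.6 mHz
env = np.exp(-((t - 1500) / 300) ** 2)     # localized wave packet
lower = env * np.sin(2 * np.pi * f * t)
upper = np.roll(lower, 6)                  # arrives 6 samples = 60 s later
print(f"estimated delay: {travel_lag(lower, upper, dt):.0f} s")
```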

Relevance: 80.00%

Abstract:

This paper addresses the pose recovery problem for a particular articulated object: the human body. In this model-based approach, the 2D shape is associated with the corresponding stick figure, allowing the joint segmentation and pose recovery of the subject observed in the scene. The main disadvantage of 2D models is their restriction to a single viewpoint. To cope with this limitation, local spatio-temporal 2D models corresponding to many views of the same sequences are trained, concatenated and sorted in a global framework. Temporal and spatial constraints are then considered to build the probabilistic transition matrix (PTM), which gives a frame-to-frame estimation of the most probable local models to use during the fitting procedure, thus limiting the feature space. This approach takes advantage of 3D information while avoiding the use of a complex 3D human model. The experiments carried out on both indoor and outdoor sequences have demonstrated the ability of this approach to adequately segment pedestrians and estimate their poses independently of the direction of motion during the sequence.
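A hedged sketch of how such a PTM could be estimated from training sequences of local-model labels and used to shortlist models frame to frame (an illustrative reading, not the authors' code):

```python
import numpy as np

def build_ptm(label_seqs, n_models: int) -> np.ndarray:
    """Probabilistic transition matrix from training sequences of
    local-model labels: ptm[i, j] = P(model j at frame t+1 | model i at t)."""
    counts = np.zeros((n_models, n_models))
    for seq in label_seqs:
        for i, j in zip(seq[:-1], seq[1:]):
            counts[i, j] += 1
    counts += 1e-6                          # avoid zero rows
    return counts / counts.sum(axis=1, keepdims=True)

train = [[0, 0, 1, 1, 2, 2, 2], [0, 1, 1, 2, 2, 3, 3]]
ptm = build_ptm(train, n_models=4)
current = 1
candidates = np.argsort(ptm[current])[::-1][:2]  # restrict fitting to these
print(f"most probable next local models after model {current}: {candidates}")
```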

Relevance: 80.00%

Abstract:

Recent renewed interest in computational writer identification has resulted in an increased number of publications. In relation to historical musicology, its application has so far been limited. One of the obstacles seems to be that the clarity of the scanned images available for computational analysis is often not sufficient. In this paper, the use of the Hinge feature is proposed to avoid segmentation and staff-line removal for effective feature extraction from low quality scans. The use of an autoencoder in Hinge feature space is suggested as an alternative to staff-line removal by image processing, and their performance is compared. The result of the experiment shows an accuracy of 87% for a dataset containing samples from 84 writers, and the superiority of our segmentation- and staff-line-removal-free approach. A practical analysis of Bach's autograph manuscript of the Well-Tempered Clavier II (Additional MS. 35021 in the British Library, London) is also presented, demonstrating the broad applicability of our approach.
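A simplified sketch of a Hinge-style descriptor, histogramming the ordered pair of "leg" orientations at each contour point (an illustrative reading of the feature; the leg length and bin count are arbitrary):

```python
import numpy as np

def hinge_feature(contour: np.ndarray, leg: int = 5, bins: int = 12):
    """Hinge-style descriptor sketch: at each point of an ordered contour
    (Nx2), take the orientations of the two legs `leg` samples away and
    histogram the ordered angle pairs. Simplified, for illustration only."""
    n = len(contour)
    pairs = []
    for i in range(n):
        v1 = contour[(i + leg) % n] - contour[i]
        v2 = contour[(i - leg) % n] - contour[i]
        a1, a2 = np.arctan2(*v1[::-1]), np.arctan2(*v2[::-1])
        pairs.append(sorted((a1, a2)))
    h, _, _ = np.histogram2d(*np.array(pairs).T, bins=bins,
                             range=[[-np.pi, np.pi]] * 2)
    return (h / h.sum()).ravel()           # normalized feature vector

theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
stroke = np.column_stack([np.cos(theta), 0.4 * np.sin(theta)])  # toy contour
print(hinge_feature(stroke).shape)         # (144,) for 12x12 bins
```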

Relevance: 80.00%

Abstract:

This research work proposes a methodology for building a terminological database aimed at a non-specialist audience. It responds to the need to convey information to consumers, who show little or no understanding of the foodstuffs with health claims available on the market: so-called functional foods. The proposed methodology for segmenting and characterising the terminographic process, based on the model developed by Gouadec for organising the overall translation process, is structured in three phases (pre-terminography, terminography and post-terminography) and comprises three strands of analysis: conceptual, communicative and textual. In general terms, the pre-terminography phase involves preparatory work (familiarisation with the specialty domain and delimitation of the sub-domain, identification of the communicative contexts, and compilation of specialised corpora) that is essential to the subsequent execution phase, terminography, in which the terminological resource is built. The final phase, post-terminography, covers the efforts made towards the industrial application of the resource, as well as its continual updating thereafter. The first two phases and their constituent stages are the object of analysis of this work. The consideration of three strands of analysis is equally relevant, as demonstrated throughout the terminographic process, namely in the analysis of how each strand, already considered in the pre-terminography phase, plays out in the terminography phase. With this research we aim to demonstrate the social role of Terminology and the contribution it can make to the dissemination of science, concretely by presenting a proposal for a terminological database on functional foods for the consumer: the AlF Beta. We likewise aim to contribute to theoretical and methodological reflection in Terminology, particularly in its applied dimension, through the development of terminological resources intended for non-specialist audiences.

Relevance: 80.00%

Abstract:

Coronary CT angiography is widely used in clinical practice for the assessment of coronary artery disease. Several studies have shown that the same exam can also be used to assess left ventricle (LV) function. LV function is usually evaluated using just the data from the end-systolic and end-diastolic phases, even though coronary CT angiography (CTA) provides data for multiple cardiac phases along the cardiac cycle. This wealth of data remains unexplored, mostly due to its complexity and the lack of proper tools, and has yet to be examined to assess whether further insight into regional LV function is possible. Furthermore, different parameters can be computed to characterize LV function, and while some are well known to clinicians, others still need to be evaluated for their value in clinical scenarios. The work presented in this thesis covers two steps towards extended use of CTA data: LV segmentation and functional analysis. A new semi-automatic segmentation method is presented to obtain LV data for all cardiac phases available in a CTA exam, and a 3D editing tool was designed to allow users to fine-tune the segmentations. Regarding segmentation evaluation, a methodology is proposed to help choose the similarity metrics to be used to compare segmentations. This methodology allows the detection of redundant measures that can be discarded. The evaluation was performed with the help of three experienced radiographers, yielding low intra- and inter-observer variability. To allow exploration of the segmented data, several parameters characterizing global and regional LV function are computed for the available cardiac phases. The data thus obtained are shown using a set of visualizations allowing synchronized visual exploration. The main purpose is to provide means for clinicians to explore the data and gather insight into their meaning, as well as their correlation with each other and with diagnostic outcomes. Finally, an interactive method is proposed to help clinicians assess myocardial perfusion by automatically assigning lesions, detected by clinicians, to a myocardial segment. This new approach has received positive feedback from clinicians and is not only an improvement over their current assessment method but also an important first step towards systematic validation of automatic myocardial perfusion assessment measures.
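As a hedged sketch of the metric-selection idea, the snippet below computes the Dice similarity between two masks and flags near-perfectly correlated metrics as redundant (synthetic scores; not the thesis' methodology):

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity between two binary segmentation masks."""
    inter = np.logical_and(a, b).sum()
    return 2 * inter / (a.sum() + b.sum())

def redundant_metrics(scores: np.ndarray, names, thresh: float = 0.95):
    """Flag metric pairs whose scores over many segmentation pairs are
    almost perfectly correlated; one of each pair can be discarded."""
    corr = np.corrcoef(scores)
    return [(names[i], names[j])
            for i in range(len(names)) for j in range(i + 1, len(names))
            if abs(corr[i, j]) > thresh]

a = np.zeros((64, 64), bool); a[16:48, 16:48] = True
b = np.roll(a, 4, axis=1)
print(f"Dice: {dice(a, b):.2f}")

# scores[m, k]: metric m evaluated on segmentation pair k (synthetic).
rng = np.random.default_rng(3)
base = rng.uniform(0.6, 1.0, 50)
scores = np.vstack([base, base + 0.01 * rng.standard_normal(50),
                    rng.uniform(0, 5, 50)])
print(redundant_metrics(scores, ["dice", "jaccard-like", "hausdorff-like"]))
```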

Relevance: 80.00%

Abstract:

In this work, a comprehensive review of the automatic analysis of proteomics and genomics images is presented. Special emphasis is given to a particularly complex image produced by a technique called two-dimensional gel electrophoresis (2-DE), with thousands of spots (or blobs). Automatic methods for the detection, segmentation and matching of blob-like features are discussed and proposed. In particular, a very robust procedure was achieved for processing 2-DE images, consisting mainly of two steps: (a) a trustworthy new approach for the automatic detection and segmentation of spots, based on the watershed transform, without any foreknowledge of spot shape or size and without user intervention; and (b) a new method for spot matching, based on image registration, that performs well under both global and local distortions. The results of the proposed methods are compared to state-of-the-art academic and commercial products.
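A minimal watershed sketch in the spirit of step (a), separating overlapping spots by flooding from local maxima (scikit-image on a synthetic gel; not the authors' implementation):

```python
import numpy as np
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

# Synthetic "gel": three Gaussian spots, two of them overlapping.
yy, xx = np.mgrid[0:128, 0:128]
gel = np.zeros((128, 128))
for cy, cx in [(40, 40), (50, 52), (90, 80)]:
    gel += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / 60)

mask = gel > 0.2
peaks = peak_local_max(gel, min_distance=5, threshold_abs=0.2)
markers = np.zeros(gel.shape, dtype=int)
markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
# Flood the inverted intensity from the spot centers, within the mask.
labels = watershed(-gel, markers, mask=mask)
print(f"{labels.max()} spots segmented")   # expect 3
```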

Relevance: 80.00%

Abstract:

Doctoral thesis, Marine, Earth and Environmental Sciences (Resource Assessment and Management), Faculdade das Ciências do Mar e do Ambiente, Universidade do Algarve, 2013

Relevance: 80.00%

Abstract:

Thesis (Ph.D.)--University of Washington, 2015