Abstract

This project implements a spike sorting algorithm in VHDL. The algorithm is based on k-means clustering. The main procedure includes three stages: writing the code in MATLAB, converting the code from MATLAB to VHDL, and implementing the VHDL code on an FPGA board. First, the given neural data is grouped into clusters using the k-means algorithm in MATLAB, in order to identify the number of neurons in the data and the times at which each neuron fires. Then, the parameters used in the MATLAB code are converted into VHDL. After that, the code is implemented on an FPGA board to find the power consumption. The simulation results demonstrate the VHDL implementation of the spike sorting algorithm.

CHAPTER 1 INTRODUCTION

Spike sorting is the grouping of spikes into clusters based on the similarity of their shapes. Given that, in principle, each neuron tends to fire spikes of a particular shape, the resulting clusters correspond to the activity of different putative neurons. The end result of spike sorting is the determination of which spike corresponds to which of these neurons. This classification is a very challenging problem. However, before tackling mathematical details and technical issues, it is important to discuss why we need to do such a job, rather than just detecting the spikes for each channel without caring from which neuron they come.

A large amount of research in Neuroscience is based on the study of the activity of neurons recorded extracellularly with very thin electrodes implanted in animals' brains. These micro wires 'listen' to a few neurons close to the electrode tip that fire action potentials or 'spikes'. Each neuron has spikes of a characteristic shape, which is mainly determined by the morphology of its dendritic tree and its distance and orientation relative to the recording electrode.

It is already well established that complex brain processes are reflected by the activity of large neural populations and that the study of single-cells in isolation gives only a very limited view of the whole picture. Therefore, progress in Neuroscience relies to a large extent on the ability to record simultaneously from large populations of cells. The implementation of optimal spike sorting algorithms is a critical step forwards in this direction, since it can allow the analysis of the activity of a few close-by neurons from each recording electrode. This opens a whole spectrum of new possibilities. For example, it is possible to study connectivity patterns of close-by neurons or to study the topographical organization of a given area and discriminate the responses of nearby units. It is also possible to have access to the activity of sparsely firing neurons, whose responses may be completely masked by other neurons with high firing rates, if not properly sorted. Separating sparsely firing neurons from a background of large multi-unit activity is not an easy job, but this type of neurons can show striking responses.

In principle, the easiest way to separate spikes corresponding to different neurons is to use an amplitude discriminator. This classification can be very fast and simple to implement on-line. However, sometimes spikes from different neurons may have the same peak amplitude but different shapes. Then, a relatively straightforward improvement is to use 'window discriminators', which assign the spikes crossing one or several windows to the same neuron. This method is implemented in commercial acquisition systems and it is still one of the most preferred ways to do spike sorting. Window discriminators can be implemented on-line, but have the main disadvantage that they require a manual setting of the windows by the user, which may need readjustment during the experiment. For this reason it is in practice not possible to sort spikes of more than a few channels simultaneously with window discriminators. Another major drawback of this approach is that in many cases spike shapes overlap and it is very difficult to set up windows that will discriminate them. This, of course, introduces a lot of subjectivity in the clustering procedure. Moreover, it is possible that sparsely firing neurons may be missed, especially if the particular input (or a particular behavior) that elicits the firing of the neuron is not present while the windows are set.

Another simple strategy to do spike sorting is to select a characteristic spike shape for each cluster and then assign the remaining spikes using template matching. This method was pioneered by Gerstein and Clark, who implemented an algorithm in which the user selects the templates and the spikes are assigned based on a mean square distance metric. This procedure can also be implemented on-line, but, like the window discriminator, it has a few drawbacks. First, it requires user intervention, which makes it impractical for a large number of channels. Second, the templates may have to be adjusted during the experiment. Third, when the spike shapes overlap it may not be straightforward to choose the templates or to decide how many templates should be taken. Fourth, as for the window discriminator, it may miss sparsely firing neurons.

Current acquisition systems allow the simultaneous recording of up to hundreds of channels. This opens the fascinating opportunity to study large cell populations to understand how they encode sensory processing and behavior. The reliability of these data critically depends on accurately identifying the activity of the individual neurons with spike sorting. To deal with a large number of channels, supervised methods such as the ones described in this section are highly time consuming, subjective, and nearly impossible to use in the course of an experiment. It is therefore clear that there is a need for new methods to deal with recordings from multiple electrodes. Surprisingly, the development of such methods has been lagging far behind the capabilities of current hardware acquisition systems. There are three main characteristics that these methods should have: i) They should give significant improvements (in terms of reliability or automation) over a simple window discriminator or a template matching algorithm. Otherwise, their use seems not justified. ii) They should be unsupervised or at least they should offer a hybrid approach where most of the calculations are done automatically and user intervention may be required only for a final step of manual supervision or validation. iii) They should be fast enough to give results in a reasonable time and, eventually, it should be possible to implement them on-line.

CHAPTER 2 LITERATURE REVIEW

The detection of neural spike activity is a technical challenge that is a prerequisite for studying many types of brain function. Measuring the activity of individual neurons accurately can be difficult due to large amounts of background noise and the difficulty in distinguishing the action potentials of one neuron from those of others in the local area. This article reviews algorithms and methods for detecting and classifying action potentials, a problem commonly referred to as spike sorting. The article first discusses the challenges of measuring neural activity and the basic issues of signal detection and classification. It reviews and illustrates algorithms and techniques that have been applied to many of the problems in spike sorting and discusses the advantages and limitations of each and the applicability of these methods for different types of experimental demands. The article is written both for the physiologist wanting to use simple methods that will improve experimental yield and minimize the selection biases of traditional techniques and for those who want to apply or extend more sophisticated algorithms to meet new experimental challenges.

K-Means Clustering

Clustering is a widely used segmentation technique for the analysis of data in the form of an image or a signal (waveform). Image segmentation is an image analysis process that aims at partitioning an image into several regions according to a homogeneity criterion. Segmentation can be a fully automatic process, but it achieves its best results with semi-automatic algorithms, i.e. algorithms that are guided by a human operator. Most of the existing segmentation algorithms are highly specific to a certain type of data, and some research is pursued to develop generic frameworks integrating these techniques. The word segmentation essentially means partitioning according to a given criterion, and clustering is a segmentation technique in which the partitioning is based on similarity.

Clustering is therefore the grouping of similar observations into subsets. Image segmentation is a very complex task, which benefits from computer assistance, and yet no general algorithm exists. The concept of a semi-automatic process naturally involves an environment in which the human operator interacts with the algorithms and the data in order to produce optimal segmentations. In the medical field, image segmentation has become an essential tool for accurate processing, and clustering-based segmentation allows clinicians to analyze the data accurately for the purpose of diagnosis.

In this context, clustering means dividing an image or a signal into groups of similar data elements, each such group being called a set or cluster. A cluster therefore consists of data elements with similar values. The term cluster can also be defined as a collection of members that are similar in some way; these similar members are grouped or bound together and the resulting group is termed a cluster.

Clustering is the collection of unlabelled data and the binding of that data into particular structures, each of which is termed a cluster. A cluster is therefore a collection of similar objects, in which the objects are similar to each other within their own cluster and different from the objects in the other clusters. A graphical representation of clustering is shown in the following diagram:

Figure shows the example of the clustering approach for the given membership elements

From the above example we can conclude that the members are divided depending on their similarity and grouped into four clusters, in which each cluster contains similar members that are inter-related to one another, while the members of different clusters are not inter-related. Finally, we conclude that in this example the four sets or clusters are distinct from one another.

An illustrative example: suppose there are 52 data elements. The maximum possible number of clusters is 52 (one element per cluster), but forming such a large number of clusters loses accuracy and is a very time-consuming process. To overcome this drawback, we choose a smaller number of clusters that is satisfactory in terms of both time and accuracy. For example, dividing the 52 elements into 4 clusters makes processing the members easier in terms of both accuracy and time. After grouping the elements into clusters there are two important tasks: finding the centroid and finding the distance. The centroid is the centre of a cluster, and the distance from the centroid to each element is measured. In addition, the distance between an element and its neighboring elements is measured to check how closely the elements are inter-related. The distance measurement is illustrated by the following example.

Figure shows the calculation of the distance between the membership values or the membership elements

Distance measurement between the data points is an important task in cluster analysis. If the components of a cluster are all inter-related, a small Euclidean distance between them is expected, both when forming the cluster and when measuring the distances within it. A difficulty arises from the mathematical formula used to combine the distances between the individual components of the data feature vectors into a single distance measure to be used for clustering. In exceptional cases even the Euclidean distance may be misleading, so all of these considerations have to be taken into account.

An ideal medical image segmentation scheme should possess some preferred properties such as minimum user interaction, fast computation, and accurate and robust segmentation results. Medical image segmentation plays an instrumental role in clinical diagnosis. One challenge in image segmentation is how to deal with the nonlinearity of real data distributions, which often forces segmentation methods to rely on more human interaction and produces unsatisfactory segmentation results.

There are numerous types of classifications proposed in the specialized literature, each relevant to the point of view required by a particular study. Since this research project deals with medical image segmentation, where the large majority of the acquired data is grey-scaled, all the techniques concerning color images will be left aside. The techniques are categorized into three main families:

• Edge based techniques.

• Region based techniques.

• Pixel based techniques

We now give a brief overview of these techniques. The following classification can be encountered: neural network segmentation, region growing, clustering, probabilistic and Bayesian approaches, tree/graph based approaches, edge based segmentation, and histogram thresholding.

A histogram is a graphical representation of the pixel values, and histogram thresholding is the simplest technique in the pixel based family. Histogram thresholding consists of finding an acceptable threshold in the grey levels of the input image in order to separate the object(s) from the background. The grey-level histogram of an ideal image clearly shows two distinct peaks, each approximated by a Gaussian, representing the grey levels of the object and the background respectively; this kind of histogram is referred to as bimodal. Gaussian filtering is one of the methods for finding the threshold value of an image, and its graphical representation is given as follows.

Edge-based segmentation

The edge-based family of techniques tries to detect edges in an image so that the boundaries of the objects can be inferred. The simplest method of this type is known as detect and link: the algorithm first tries to detect local discontinuities and then tries to build longer ones by connecting them, hopefully leading to closed boundaries which circumscribe the objects in the image. As a consequence, the image will not necessarily be split sharply into regions; some improvements to this method have been proposed in order to overcome this issue.

Region-based segmentation

Image regions belonging to an object generally have homogeneous characteristics. The region-based family of techniques fundamentally aims at iteratively building regions in the image until a certain level of stability is reached. Region growing algorithms start from well chosen seeds; they then expand the seed regions by annexing their homogeneous neighbors, and the process is iterated until all the pixels in the image have been classified. Region splitting algorithms use the entire image as a seed and split it into regions until no more heterogeneity can be found. The shape of an object can be described in terms of its boundary or the region it occupies.

Other region-based segmentation techniques

These include the split-and-merge type and the watershed type.

Split-and-merge type:

In the split-and-merge technique the process proceeds as follows: in the first stage an image is split into many small regions according to the given criteria, and then the regions are merged, depending on sufficient similarity, to produce the final segmentation.

Watershed-based segmentation:

In watershed-based segmentation the process proceeds in the following way: the gradient magnitude image is considered as a topographic relief in which the physical elevation represents the brightness value of each voxel. An immersion type of approach is used for the calculation of the watersheds. The procedure results in a partitioning of the image into many catchment basins, the borders of which define the watersheds. To reduce over-segmentation, the image is smoothed by 3D adaptive anisotropic diffusion prior to the watershed operation. Semi-automatic merging of volume primitives returned by the watershed operation is then used to produce the final segmentation.

The operation can be described by imagining that holes are pierced in each local minimum of the topographic relief. Then, the surface is slowly immersed in water, which causes a flooding of all the catchment basins, starting from the basin associated with the global minimum. As soon as two catchment basins begin to merge, a dam is built.

In image processing and photography, a color histogram is a representation of the distribution of colors in an image. For digital images, a color histogram represents the number of pixels that have colors in each of a fixed list of color ranges that span the image's color space, the set of all possible colors.

Color histograms are flexible constructs that can be built from images in various color spaces, whether RGB, rg chromaticity or any other color space of any dimension. The color histogram is a statistic that can be viewed as an approximation of an underlying continuous distribution of color values. The color histogram can be built from different kinds of color space images, although the term is more often used for three-dimensional color space representations such as RGB, NTSC or HSV. The term intensity histogram may be used instead for monochromatic images. For multi-spectral images, where each pixel is represented by an arbitrary number of measurements (for example, beyond the three measurements in RGB), the color histogram is N-dimensional, with N being the number of measurements taken. Each measurement has its own range of the light spectrum of different wavelengths, some of which may lie outside the visible spectrum.

If the possible set of color values is sufficiently small, each of those colors may be placed on a range by itself; then the histogram is merely the count of pixels that have each possible color. Most often, the space is divided into an appropriate number of ranges, often arranged as a regular grid, each containing many similar color values. The color histogram may also be represented and displayed as a smooth function defined over the color space that approximates the pixel counts.

A histogram of an image is produced first by discretization of the colors in the image into a number of bins, and counting the number of image pixels in each bin. For example, a Red–Blue chromaticity histogram can be formed by first normalizing color pixel values by dividing RGB values by R+G+B, then quantizing the normalized R and B coordinates into N bins each.

In the existing system, standard histograms are used; because of their efficiency and insensitivity to small changes, standard histograms are widely used for content based image retrieval. But the main disadvantage of histograms is that many images of different appearance can have similar histograms, because histograms provide only a coarse characterization of an image.

The histogram refinement method further refines the histogram by splitting the pixels in a given bucket into several classes and producing a comparison of 8-bin (bucket) and 16-bin histograms. Histogram refinement provides a set of features proposed for Content Based Image Retrieval (CBIR).

In this module the RGB image is first changed to a grayscale image, also known as the intensity image, which is a single 2-D matrix containing values from 0 to 255. After the conversion from RGB to grayscale, we perform quantization to reduce the number of levels in the image: the 256 levels are reduced to 16 levels in the quantized image by using uniform quantization. The segmentation is done by using color histograms. Clustering is considered the most important unsupervised learning problem because no information about the "right answer" is provided for any of the objects. Cluster analysis allows many choices about the nature of the algorithm for combining groups. It finds a reasonable structure in the data set, based on the classification of a set of observations, where neither the number of clusters nor the rules of assignment into clusters are known; that is, no a priori information about the data is required. There are two basic kinds of data for the purpose of clustering: supervised and unsupervised. If the input contains labels it is called supervised data; if the input does not contain any labels it is termed unsupervised data. Clustering is the binding together of data with similar characteristics. The similarity of objects is used to divide the data into groups, and for finding the similarity of objects we use a distance function as the main criterion on the data set.

Clustering algorithms essentially perform the same function as classifier methods, but without the use of training data. To compensate for the lack of training data, clustering methods alternate between segmenting the image and characterizing the properties of each class. In a sense, clustering methods train themselves using the available data.

The K-means clustering algorithm clusters data by iteratively computing a mean intensity for each class and segmenting the image by classifying each pixel into the class with the closest mean. Although clustering algorithms do not require training data, they do require an initial segmentation (or, equivalently, initial parameters). The EM algorithm, for example, alternates between computing posterior probabilities and computing maximum likelihood estimates of the means, covariances, and mixing coefficients of a mixture model. Like the classifier methods, clustering algorithms do not directly incorporate spatial modeling and can therefore be affected by noise and intensity inhomogeneities. This can be illustrated by the following example:

Figure below shows the brain region which is affected by the tumor region

The number of classes was assumed to be three, representing (from dark gray to white) cerebrospinal fluid, gray matter, and white matter. The fuzzy c-means algorithm generalizes the K-means algorithm, allowing for soft segmentations based on fuzzy set theory. The EM algorithm applies the same clustering principles with the underlying assumption that the data follow a Gaussian mixture model.

The main aim of K-means clustering is to form 'k' clusters from 'm' data elements, where within each of the 'k' clusters the members should be similar to one another. After the 'k' clusters are found, a cross-check takes place, namely the computation of the centroid and of the distances. The centroid is the centre of a cluster, and the distance from this centre to each membership value is computed. The distance between neighboring pixel values and the original membership value is also computed; this distance is used to find the correlation, that is, the similarity, between the pixel values.

CLUSTER ANALYSIS

Cluster analysis or clustering is the assignment of a set of observations into subsets (called clusters) so that observations in the same cluster are similar in some sense. Clustering is a method of unsupervised learning, and a common technique for statistical data analysis used in many fields, including machine learning, data mining, pattern recognition, image analysis and bioinformatics.

A cluster is a collection of data objects that are similar to one another within the same cluster and are dissimilar to the objects in other clusters. Cluster analysis has been widely used in numerous applications, including pattern recognition, data analysis, image processing, and market research. By clustering, one can identify dense and sparse regions and therefore, discover overall distribution patterns and interesting correlations among data attributes.

As a branch of statistics, cluster analysis has been studied extensively for many years, focusing mainly on distance-based cluster analysis. Cluster analysis tools based on k-means, k-medoids, and several other methods have also been built into many statistical analysis software packages or systems, such as S-Plus, SPSS, and SAS. In machine learning, clustering is an example of unsupervised learning. Unlike classification, clustering and unsupervised learning do not rely on predefined classes and class-labeled training examples. For this reason, clustering is a form of learning by observation, rather than learning by examples. In conceptual clustering, a group of objects forms a class only if it is describable by a concept. This differs from conventional clustering, which measures similarity based on geometric distance. Conceptual clustering consists of two components: (1) it discovers the appropriate classes, and (2) it forms descriptions for each class, as in classification. The guideline of striving for high intra class similarity and low interclass similarity still applies.

An important step in most clustering is to select a distance measure, which will determine how the similarity of two elements is calculated. This will influence the shape of the clusters, as some elements may be close to one another according to one distance and farther away according to another. For example, in a 2-dimensional space, the distance between the point (x = 1, y = 0) and the origin (x = 0, y = 0) is always 1 according to the usual norms, but the distance between the point (x = 1, y = 1) and the origin is 2, √2, or 1 if you take respectively the 1-norm, 2-norm or infinity-norm distance.

Common distance functions:

• The Euclidean distance (also called distance as the crow flies or 2-norm distance). A review of cluster analysis in health psychology research found that the most common distance measure in published studies in that research area is the Euclidean distance or the squared Euclidean distance.

• The Manhattan distance.

• The maximum norm.

• The Mahalanobis distance, which corrects data for different scales and correlations in the variables.

• The angle between two vectors, which can be used as a distance measure when clustering high dimensional data.

• The Hamming distance, which measures the minimum number of substitutions required to change one member into another.

Another important distinction is whether the clustering uses symmetric or asymmetric distances. Many of the distance functions listed above have the property that distances are symmetric.

Future Enhancement

• This classification of the feature set can be enhanced to heterogeneous features (shape, texture) so that we can get a more accurate result.

• It can also be enhanced by merging heterogeneous features and neural networks.

• The schemes proposed in this work can be further improved by introducing fuzzy logic concepts into the clustering process.

CENTROID-BASED TECHNIQUE: THE K-MEANS METHOD

The k-means algorithm takes the input parameter, k, and partitions a set of n objects into k clusters so that the resulting intra-cluster similarity is high but the inter-cluster similarity is low. Cluster similarity is measured in regard to the mean value of the objects in a cluster, which can be viewed as the cluster's center of gravity. How does the k-means algorithm work? The k-means algorithm proceeds as follows. First, it randomly selects k of the objects, each of which initially represents a cluster mean or center. Each of the remaining objects is assigned to the cluster to which it is the most similar, based on the distance between the object and the cluster mean. It then computes the new mean for each cluster. This process iterates until the criterion function converges. Typically, the squared-error criterion is used, defined as

E = Σ (i = 1..k) Σ (p ∈ Ci) |p − mi|²

where E is the sum of squared error for all objects in the database, p is the point in space representing a given object, and mi is the mean of cluster Ci (both p and mi are multidimensional). This criterion tries to make the resulting k clusters as compact and as separate as possible.

The algorithm attempts to determine the k partitions that minimize the squared-error function. It works well when the clusters are compact clouds that are rather well separated from one another. The method is relatively scalable and efficient in processing large data sets because the computational complexity of the algorithm is O(nkt), where n is the total number of objects, k is the number of clusters, and t is the number of iterations. Normally, k << n and t << n. The method often terminates at a local optimum.

The k-means method, however, can be applied only when the mean of a cluster is defined. This may not be the case in some applications, such as when data with categorical attributes are involved. The necessity for users to specify k, the number of clusters, in advance can be seen as a disadvantage. The k-means method is not suitable for discovering clusters with non-convex shapes or clusters of very different size. Moreover, it is sensitive to noise and outlier data points, since a small number of such points can substantially influence the mean value. Suppose that there is a set of objects located in space, and let k = 2; that is, the user would like to cluster the objects into two clusters.

According to the algorithm, we arbitrarily choose two objects as the two initial cluster centers, where cluster centers are marked by a “+”. Each object is distributed to a cluster based on the cluster center to which it is the nearest. Such a distribution forms silhouettes encircled by dotted curves, as shown in Fig. This kind of grouping will update the cluster centers. That is, the mean value of each cluster is recalculated based on the objects in the cluster. Relative to these new centers, objects are redistributed to the cluster domains based on which cluster center is the nearest. Such redistribution forms new silhouettes encircled by dashed curves, as shown in Fig. Eventually, no redistribution of the objects in any cluster occurs and so the process terminates. The resulting clusters are returned by the clustering process.

K-MEANS CLUSTERING ALGORITHM

Algorithm: k-means. The k-means algorithm for partitioning based on the mean value of the objects in the cluster.

Input: The number of clusters k and a database containing n objects.

Output: A set of k clusters that minimizes the squared-error criterion.

Method:
(1) arbitrarily choose k objects as the initial cluster centers;
(2) repeat:
(3) (re)assign each object to the cluster to which the object is the most similar, based on the mean value of the objects in the cluster;
(4) update the cluster means, i.e., calculate the mean value of the objects for each cluster;
(5) until no change.
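To give a feel for how the assignment step of this algorithm might map onto hardware, the following is a minimal VHDL sketch. It is purely illustrative: the entity, port, and signal names are hypothetical, and it assumes one-dimensional 8-bit samples and only k = 2 clusters whose means are supplied from outside. A complete design would also recompute the cluster means (the update step) around this comparator.

-- Minimal illustrative sketch: k-means assignment step for k = 2.
-- On each clock edge, one 8-bit sample is labelled with the index of the
-- nearer of two externally supplied cluster means (1-D distance).
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity kmeans_assign is
  port (
    clk     : in  std_logic;
    sample  : in  unsigned(7 downto 0);  -- input data point
    mean0   : in  unsigned(7 downto 0);  -- centre of cluster 0
    mean1   : in  unsigned(7 downto 0);  -- centre of cluster 1
    label_o : out std_logic              -- '0' => cluster 0, '1' => cluster 1
  );
end entity kmeans_assign;

architecture rtl of kmeans_assign is
begin
  process (clk)
    variable d0, d1 : unsigned(7 downto 0);
  begin
    if rising_edge(clk) then
      -- absolute differences; for a 1-D comparison the squaring step can be omitted
      if sample > mean0 then d0 := sample - mean0; else d0 := mean0 - sample; end if;
      if sample > mean1 then d1 := sample - mean1; else d1 := mean1 - sample; end if;
      if d0 <= d1 then
        label_o <= '0';
      else
        label_o <= '1';
      end if;
    end if;
  end process;
end architecture rtl;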

The purpose of K-means clustering is to classify the data. We selected K-means clustering because it is suitable for clustering large amounts of data. K-means creates a single level of clusters, unlike the tree structure of hierarchical clustering methods. Each observation in the data is treated as an object having a location in space, and a partition is found in which objects within each cluster are as close to each other as possible, and as far from objects in other clusters as possible. Selection of the distance measure is an important step in clustering. The distance measure determines the similarity of two elements. It greatly influences the shape of the clusters, as some elements may be close to one another according to one distance and further away according to another. We selected a quadratic distance measure, which provides the quadratic distance between the various features. We calculated the distance between all the row vectors of the feature set obtained in the previous section, hence finding the similarity between every pair of objects in the data set. The result is a distance matrix.

Next, we used the member objects and the centroid to define each cluster. The centroid for each cluster is the point to which the sum of distances from all objects in that cluster is minimized. The distance information generated above is utilized to determine the proximity of objects to each other. The objects are grouped into K clusters using the distance between the centroids of the two groups. Let Op be the number of objects in cluster p and Oq the number of objects in cluster q, and let dpi be the ith object in cluster p and dqj the jth object in cluster q. The centroid distance between the two clusters p and q is given as:

d(p, q) = || (1/Op) Σ (i = 1..Op) dpi − (1/Oq) Σ (j = 1..Oq) dqj ||

where the two sums are the centroids of clusters p and q respectively.

VHDL Design entities and configurations

The design entity is the primary hardware abstraction in VHDL. It represents a portion of a hardware design that has well-defined inputs and outputs and performs a well-defined function. A design entity may represent an entire system, a subsystem, a board, a chip, a macro-cell, a logic gate, or any level of abstraction in between. A configuration can be used to describe how design entities are put together to form a complete design.

A design entity may be described in terms of a hierarchy of blocks, each of which represents a portion of the whole design. The top-level block in such a hierarchy is the design entity itself; such a block is an external block that resides in a library and may be used as a component of other designs. Nested blocks in the hierarchy are internal blocks, defined by block statements.

Entity declarations:

An entity declaration defines the interface between a given design entity and the environment in which it is used. It may also specify declarations and statements that are part of the design entity. A given entity declaration may be shared by many design entities, each of which has a different architecture. Thus, an entity declaration can potentially represent a class of design entities, each with the same interface.

entity_declaration ::=

entity identifier is

entity_header

entity_declarative_part

[begin

entity_statement_part ]

end [ entity ] [ entity_simple_name ] ;

Generics:

Generics provide a channel for static information to be communicated to a block from its environment. The following applies to both external blocks defined by design entities and to internal blocks defined by block statements.

generic_list ::= generic_interface_list

The generics of a block are defined by a generic interface list. Each interface element in such a generic interface list declares a formal generic.

Ports:

Ports provide channels for dynamic communication between a block and its environment.

port_list ::= port_interface_list
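As a concrete illustration of the entity interface, the following hedged example (the name adder and its ports are arbitrary) shows an entity declaration whose header contains both a generic clause and a port clause:

library ieee;
use ieee.std_logic_1164.all;

entity adder is
  generic (
    WIDTH : integer := 8                              -- static information from the environment
  );
  port (
    a, b : in  std_logic_vector(WIDTH-1 downto 0);    -- dynamic inputs
    sum  : out std_logic_vector(WIDTH-1 downto 0)     -- dynamic output
  );
end entity adder;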

Architecture bodies:

An architecture body defines the body of a design entity. It specifies the relationships between the inputs and outputs of a design entity and may be expressed in terms of structure, dataflow, or behavior. Such specifications may be partial or complete.

architecture_body ::=

architecture identifier of entity_name is

architecture_declarative_part

begin

architecture_statement_part

end [ architecture ] [ architecture_simple_name ] ;
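For example, a simple dataflow-style architecture body for the hypothetical adder entity shown above could be written as follows (it assumes the numeric_std package for the arithmetic):

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

architecture dataflow of adder is
begin
  -- the relationship between inputs and outputs expressed as one concurrent assignment
  sum <= std_logic_vector(unsigned(a) + unsigned(b));
end architecture dataflow;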

Subprograms and Package

Subprogram declarations:

A subprogram declaration declares a procedure or a function, as indicated by the appropriate reserved word.

subprogram_declaration ::=

subprogram_specification ;

subprogram_specification ::=

procedure designator [ ( formal_parameter_list ) ]

| [ pure | impure ] function designator [ ( formal_parameter_list ) ]

return type_mark

The specification of a procedure specifies its designator and its formal parameters (if any). The specification of a function specifies its designator, its formal parameters (if any), the subtype of the returned value (the result subtype), and whether or not the function is pure. A function is impure if its specification contains the reserved word impure; otherwise, it is said to be pure. A procedure designator is always an identifier. A function designator is either an identifier or an operator symbol.

Subprogram bodies:

A subprogram body specifies the execution of a subprogram.

subprogram_body ::=

subprogram_specification is

subprogram_declarative_part

begin

subprogram_statement_part

end [ subprogram_kind ] [ designator ] ;
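A small illustrative example (the function name parity is hypothetical) of a pure function declaration and the corresponding subprogram body:

-- subprogram declaration (for example, in a package declaration)
function parity (v : bit_vector) return bit;

-- subprogram body (for example, in the corresponding package body)
function parity (v : bit_vector) return bit is
  variable p : bit := '0';
begin
  for i in v'range loop
    p := p xor v(i);   -- accumulate the XOR of all elements
  end loop;
  return p;
end function parity;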

Package declarations:

A package declaration defines the interface to a package. The scope of a declaration within a package can be extended to other design units.

package_declaration ::=

package identifier is

package_declarative_part

end [ package ] [ package_simple_name ] ;

Package bodies

A package body defines the bodies of subprograms and the values of deferred constants declared in the interface to the package.

package_body ::=

package body package_simple_name is

package_body_declarative_part

end [ package body ] [ package_simple_name ] ;
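Putting the two together, a minimal hypothetical package with a deferred constant and a subprogram might look like this (all names are illustrative only):

package util_pkg is
  constant MAX_COUNT : integer;                     -- deferred constant: value given in the body
  function max2 (a, b : integer) return integer;    -- subprogram declaration
end package util_pkg;

package body util_pkg is
  constant MAX_COUNT : integer := 255;              -- value of the deferred constant
  function max2 (a, b : integer) return integer is
  begin
    if a > b then
      return a;
    else
      return b;
    end if;
  end function max2;
end package body util_pkg;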

Data Types: Scalar Types:

Scalar types can be classified into four categories. They are:

— Enumeration

— Integer

— Physical

— Floating Point

Enumeration types:

An enumeration type definition defines an enumeration type.

enumeration_type_definition ::=

( enumeration_literal { , enumeration_literal } )

enumeration_literal ::= identifier | character_literal

Integer types:

An integer type definition defines an integer type whose set of values includes those of the specified range.

integer_type_definition ::= range_constraint

Physical types:

Values of a physical type represent measurements of some quantity. Any value of a physical type is an integral multiple of the primary unit of measurement for that type.

physical_type_definition ::=

range_constraint

units

primary_unit_declaration

{ secondary_unit_declaration }

end units [ physical_type_simple_name ]

Floating point types:

Floating point types provide approximations to the real numbers. Floating point types are useful for models in which the precise characterization of a floating point calculation is not important or not determined.

floating_type_definition ::= range_constraint
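The following declarations, given only as a sketch with made-up type names, show one example of each kind of scalar type definition:

type state_t   is (idle, run, done);        -- enumeration type
type small_int is range 0 to 255;           -- integer type
type capacity  is range 0 to 1_000_000      -- physical type
  units
    pf;                                     -- primary unit
    nf = 1000 pf;                           -- secondary unit
  end units;
type voltage   is range -16.0 to 16.0;      -- floating point type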

Composite types:

Composite types are used to define collections of values. These include both arrays of values (collections of values of a homogeneous type) and records of values (collections of values of potentially heterogeneous types).

Array types

An array object is a composite object consisting of elements that have the same subtype. The name for an element of an array uses one or more index values belonging to specified discrete types. The value of an array object is a composite value consisting of the values of its elements

unconstrained_array_definition ::=

array ( index_subtype_definition { , index_subtype_definition } )

of element_subtype_indication

constrained_array_definition ::=

array index_constraint of element_subtype_indication

Record types:

A record type is a composite type, objects of which consist of named elements. The value of a record object is a composite value consisting of the values of its elements.

record_type_definition ::=

record

element_declaration

{ element_declaration }

end record [ record_type_simple_name ]
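As an illustration (type names are hypothetical), a constrained array, an unconstrained array, and a record type could be declared as follows:

type word       is array (15 downto 0) of bit;                         -- constrained array type
type bit_matrix is array (natural range <>, natural range <>) of bit;  -- unconstrained array type

type sample_rec is record                                              -- record type
  value : integer;
  valid : boolean;
end record;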

Access types:

An object declared by an object declaration is created by the elaboration of the object declaration and is denoted by a simple name or by some other form of name. In contrast, objects that are created by the evaluation of allocators (see 7.3.6) have no simple name. Access to such an object is achieved by an access value returned by an allocator; the access value is said to designate the object.

access_type_definition ::= access subtype_indication

File types

A file type definition defines a file type. File types are used to define objects representing files in the host system environment. The value of a file object is the sequence of values contained in the host system file.

file_type_definition ::= file of type_mark

Data Objects:

Object declarations

An object declaration declares an object of a specified type. Such an object is called an explicitly declared object.

Constant declarations

A constant declaration declares a constant of the specified type. Such a constant is an explicitly declared constant.

constant_declaration ::=

constant identifier_list : subtype_indication [ := expression ] ;

If the assignment symbol “:=” followed by an expression is present in a constant declaration, the expression specifies the value of the constant; the type of the expression must be that of the constant. The value of a constant cannot be modified after the declaration is elaborated.

Signal declarations

A signal declaration declares a signal of the specified type. Such a signal is an explicitly declared signal.

signal_declaration ::=

signal identifier_list : subtype_indication [ signal_kind ] [ := expression ] ;

signal_kind ::= register | bus

Variable declarations

A variable declaration declares a variable of the specified type. Such a variable is an explicitly declared variable.

variable_declaration ::=

[ shared ] variable identifier_list : subtype_indication [ := expression ] ;

File declarations

A file declaration declares a file of the specified type. Such a file is an explicitly declared file.

file_declaration ::=

file identifier_list : subtype_indication [ file_open_information ] ;
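For instance, the four kinds of explicitly declared objects could be written as below; the names are illustrative, and each declaration must of course appear in a region where it is legal (a variable declaration, for example, normally appears inside a process or subprogram):

constant WIDTH : integer := 8;                            -- constant declaration
signal   count : integer range 0 to 255 := 0;             -- signal declaration
variable acc   : integer := 0;                            -- variable declaration (inside a process or subprogram)

type int_file is file of integer;
file     f_in  : int_file open read_mode is "data.txt";   -- file declaration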

Operators:

Logical Operators:

The logical operators and, or, nand, nor, xor, xnor, and not are defined for predefined types BIT and BOOLEAN. They are also defined for any one-dimensional array type whose element type is BIT or BOOLEAN. For the binary operators and, or, nand, nor, xor, and xnor, the operands must be of the same base type. Moreover, for the binary operators and, or, nand, nor, xor, and xnor defined on one-dimensional array types, the operands must be arrays of the same length, the operation is performed on matching elements of the arrays, and the result is an array with the same index range as the left operand.

Relational Operators.

Relational operators include tests for equality, inequality, and ordering of operands. The operands of each relational operator must be of the same type. The result type of each relational operator is the predefined type BOOLEAN.

Operator    Operation                Operand type                              Result type
=           Equality                 Any type                                  BOOLEAN
/=          Inequality               Any type                                  BOOLEAN
<           Less than                Any scalar type or discrete array type    BOOLEAN
<=          Less than or equal       Any scalar type or discrete array type    BOOLEAN
>           Greater than             Any scalar type or discrete array type    BOOLEAN
>=          Greater than or equal    Any scalar type or discrete array type    BOOLEAN

Shift Operators.

The shift operators sll, srl, sla, sra, rol, and ror are defined for any one-dimensional array type whose element type is either of the predefined types BIT or BOOLEAN.

Operator    Operation                 Left operand type                                                       Right operand type    Result type
sll         Shift left logical        Any one-dimensional array type whose element type is BIT or BOOLEAN    INTEGER               Same as left
srl         Shift right logical       Any one-dimensional array type whose element type is BIT or BOOLEAN    INTEGER               Same as left
sla         Shift left arithmetic     Any one-dimensional array type whose element type is BIT or BOOLEAN    INTEGER               Same as left
sra         Shift right arithmetic    Any one-dimensional array type whose element type is BIT or BOOLEAN    INTEGER               Same as left
rol         Rotate left logical       Any one-dimensional array type whose element type is BIT or BOOLEAN    INTEGER               Same as left
ror         Rotate right logical      Any one-dimensional array type whose element type is BIT or BOOLEAN    INTEGER               Same as left

Adding Operators.

The adding operators + and - are predefined for any numeric type and have their conventional mathematical meaning. The concatenation operator & is predefined for any one-dimensional array type.

Operator    Operation        Left operand type    Right operand type    Result type
+           Addition         Any numeric type     Same type             Same type
-           Subtraction      Any numeric type     Same type             Same type
&           Concatenation    Any array type       Same array type       Same array type
&           Concatenation    Any array type       The element type      Same array type
&           Concatenation    The element type     Any array type        Same array type
&           Concatenation    Any element type     Same element type     Any array type of that element type

Multiplying Operators:

The operators * and / are predefined for any integer and any floating point type and have their conventional mathematical meaning; the operators mod and rem are predefined for any integer type. For each of these operators, the operands and the result are of the same type.

Operator    Operation         Left operand type          Right operand type    Result type
*           Multiplication    Any integer type           Same type             Same type
*           Multiplication    Any floating point type    Same type             Same type
/           Division          Any integer type           Same type             Same type
/           Division          Any floating point type    Same type             Same type
mod         Modulus           Any integer type           Same type             Same type
rem         Remainder         Any integer type           Same type             Same type

Miscellaneous operators:

The unary operator abs is predefined for any numeric type.

Operator    Operation         Operand type        Result type
abs         Absolute value    Any numeric type    Same numeric type

The exponentiating operator ** is predefined for each integer type and for each floating point type. In either case the right operand, called the exponent, is of the predefined type INTEGER.

Operator    Operation         Left operand type          Right operand type    Result type
**          Exponentiation    Any integer type           INTEGER               Same as left
**          Exponentiation    Any floating point type    INTEGER               Same as left
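A few hedged examples of how these operator classes appear in practice; the fragment is assumed to sit inside an architecture, and all signal names are arbitrary:

-- declarative part
signal a, b    : bit_vector(7 downto 0);
signal y_log   : bit_vector(7 downto 0);
signal y_shift : bit_vector(7 downto 0);
signal y_cat   : bit_vector(7 downto 0);
signal eq      : boolean;
signal n, m    : integer;

-- statement part
y_log   <= a and (not b);                  -- logical operators
eq      <= (a = b);                        -- relational operator; result type is BOOLEAN
y_shift <= a sll 2;                        -- shift operator: shift left logical by two positions
y_cat   <= a(3 downto 0) & b(3 downto 0);  -- concatenation with the adding operator &
n       <= (7 * 6) mod 5;                  -- multiplying operators
m       <= 2 ** 5;                         -- exponentiation; right operand is of type INTEGER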

In VHDL there are mainly three modeling styles. These are:

1. Behavioral Modeling.

2. Data Flow Modeling.

3. Structural Modeling.

Behavioral Modeling: Process statement

A process statement defines an independent sequential process representing the behavior of some portion of the design.

process_statement ::=

[ process_label : ]

[ postponed ] process [ ( sensitivity_list ) ] [ is ]

process_declarative_part

begin

process_statement_part

end [ postponed ] process [ process_label ] ;

If a sensitivity list appears following the reserved word process, then the process statement is assumed to contain an implicit wait statement at the end of the process statement part, and the sensitivity list of that wait statement is the one following the reserved word process. Such a process statement must not contain an explicit wait statement. Similarly, if such a process statement is a parent of a procedure, then that procedure may not contain a wait statement.
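A small behavioral example of a process statement with a sensitivity list is shown below; it describes a hypothetical 8-bit register with synchronous reset, and the signals clk, rst, d and q are assumed to be declared elsewhere:

-- illustrative register; the sensitivity list replaces an explicit wait statement
reg_proc : process (clk)
begin
  if rising_edge(clk) then
    if rst = '1' then
      q <= (others => '0');   -- synchronous reset
    else
      q <= d;                 -- capture the input on every rising clock edge
    end if;
  end if;
end process reg_proc;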

Sequential statements:

The various forms of sequential statements are described in this section. Sequential statements are used to define algorithms for the execution of a subprogram or process; they execute in the order in which they appear.

Wait statement

The wait statement causes the suspension of a process statement or a procedure.

wait_statement ::=

[ label : ] wait [ sensitivity_clause ] [ condition_clause ] [ timeout_clause ] ;

sensitivity_clause ::= on sensitivity_list

sensitivity_list ::= signal_name { , signal_name }

condition_clause ::= until condition

condition ::= boolean_expression

timeout_clause ::= for time_expression

Assertion statement:

An assertion statement checks that a specified condition is true and reports an error if it is not.

assertion_statement ::= [ label : ] assertion ;

assertion ::=

assert condition

[ report expression ]

[ severity expression ]

Report statement:

A report statement displays a message.

report_statement ::=

[ label : ]

report expression

[ severity expression ]

If statement:

An if statement selects for execution one or none of the enclosed sequences of statements, depending on the value of one or more corresponding conditions.

if_statement ::=

[ if_label : ]

if condition then

sequence_of_statements

{ elsif condition then

sequence_of_statements }

[ else

sequence_of_statements ]

end if [ if_label ] ;

If a label appears at the end of an if statement, it must repeat the if label.

For the execution of an if statement, the condition specified after if, and any conditions specified after elsif, are evaluated in succession (treating a final else as elsif TRUE then) until one evaluates to TRUE or all conditions are evaluated and yield FALSE. If one condition evaluates to TRUE, then the corresponding sequence of statements is executed; otherwise, none of the sequences of statements is executed.
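For instance, inside a process the following if statement (with signals sel, a, b, c and y assumed declared) selects one of three sequences of statements:

if sel = "00" then
  y <= a;          -- executed only when the first condition is TRUE
elsif sel = "01" then
  y <= b;
else
  y <= c;          -- executed when all conditions yield FALSE
end if;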

Case statement:

A case statement selects for execution one of a number of alternative sequences of statements; the chosen alternative is defined by the value of an expression.

case_statement ::=

[ case_label : ]

case expression is

case_statement_alternative

{ case_statement_alternative }

end case [ case_label ] ;

case_statement_alternative ::=

when choices =>

sequence_of_statements

The expression must be of a discrete type, or of a one-dimensional array type whose element base type is a character type. This type must be determinable independently of the context in which the expression occurs, but using the fact that the expression must be of a discrete type or a one-dimensional character array type. Each choice in a case statement alternative must be of the same type as the expression; the list of choices specifies for which values of the expression the alternative is chosen.
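A typical example (signal names assumed) in which the chosen alternative depends on the value of a 2-bit selector:

case sel is
  when "00"   => y <= a;
  when "01"   => y <= b;
  when "10"   => y <= c;
  when others => y <= d;   -- covers every remaining value of the expression
end case;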

Loop statement:

A loop statement includes a sequence of statements that is to be executed repeatedly, zero or more times.

loop_statement ::=

[ loop_label : ]

[ iteration_scheme ] loop

sequence_of_statements

end loop [ loop_label ] ;

iteration_scheme ::=

while condition

| for loop_parameter_specification

parameter_specification ::=

identifier in discrete_range
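The two iteration schemes can be illustrated as follows; the fragment is assumed to be inside a process, with data a bit_vector signal and parity and count variables declared there:

-- for iteration scheme: the loop parameter i takes each value of the discrete range in turn
for i in 0 to 7 loop
  parity := parity xor data(i);
end loop;

-- while iteration scheme: the body repeats as long as the condition holds
while count < 10 loop
  count := count + 1;
end loop;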

Next statement:

A next statement is used to complete the execution of one of the iterations of an enclosing loop statement (called loop in the following text). The completion is conditional if the statement includes a condition.

next_statement ::=

[ label : ] next [ loop_label ] [ when condition ] ;

Exit statement:

An exit statement is used to complete the execution of an enclosing loop statement (called loop in the following text). The completion is conditional if the statement includes a condition.

exit_statement ::=

[ label : ] exit [ loop_label ] [ when condition ] ;

Return statement

A return statement is used to complete the execution of the innermost enclosing function or procedure body.

return_statement ::=

[ label : ] return [ expression ] ;

Null statement

A null statement performs no action.

null_statement ::=

[ label : ] null ;

Data Flow Modeling:

The various forms of concurrent statements are described in this section. Concurrent statements are used to define interconnected blocks and processes that jointly describe the overall behavior or structure of a design. Concurrent statements execute asynchronously with respect to each other.

Block statement:

A block statement defines an internal block representing a portion of a design. Blocks may be hierarchically nested to support design decomposition.

block_statement ::=

block_label :

block [ ( guard_expression ) ] [ is ]

block_header

block_declarative_part

begin

block_statement_part

end block [ block_label ] ;

If a guard expression appears after the reserved word block, then a signal with the simple name GUARD of predefined type BOOLEAN is implicitly declared at the beginning of the declarative part of the block, and the guard expression defines the value of that signal at any given time (see 12.6.4). The type of the guard expression must be type BOOLEAN. Signal GUARD may be used to control the operation of certain statements within the block (see 9.5).

Concurrent procedure call statements:

A concurrent procedure call statement represents a process containing the corresponding sequential procedure call statement.

concurrent_procedure_call_statement ::=

[ label : ] [ postponed ] procedure_call ;

For any concurrent procedure call statement, there is an equivalent process statement. The equivalent process statement is a postponed process if and only if the concurrent procedure call statement includes the reserved word postponed.

Concurrent assertion statements:

A concurrent assertion statement represents a passive process statement containing the specified assertion statement.

concurrent_assertion_statement ::=

[ label : ] [ postponed ] assertion ;

Concurrent signal assignment statements

A concurrent signal assignment statement represents an equivalent process statement that assigns values to signals.

concurrent_signal_assignment_statement ::=

[ label : ] [ postponed ] conditional_signal_assignment

| [ label : ] [ postponed ] selected_signal_assignment

Conditional signal assignments:

The conditional signal assignment represents a process statement in which the signal transform is an if statement.

target <= options waveform1 when condition1 else

waveform2 when condition2 else

waveform3 when condition3 else

————–

—————

waveformN-1 when conditionN-1 else

waveformN when conditionN;
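For example, a 4-to-1 multiplexer written as a conditional signal assignment (the signal names are assumptions):

y <= a when sel = "00" else
     b when sel = "01" else
     c when sel = "10" else
     d;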

Selected signal assignments:

The selected signal assignment represents a process statement in which the signal transform is a case statement.

with expression select

target <= options waveform1 when choice_list1 ,

waveform2 when choice_list2 ,

waveform3 when choice_list3,

————–

—————

waveformN-1 when choice_listN-1,

waveformN when choice_listN ;
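The same hypothetical multiplexer written as a selected signal assignment:

with sel select
  y <= a when "00",
       b when "01",
       c when "10",
       d when others;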

Structural Modeling: Component declarations:

A component declaration declares a virtual design entity interface that may be used in a component instantiation statement. A component configuration or a configuration specification can be used to associate a component instance with a design entity that resides in a library.

component_declaration ::=

component identifier [ is ]

[ local_generic_clause ]

[ local_port_clause ]

end component [ component_simple_name ] ;

Each interface object in the local generic clause declares a local generic. Each interface object in the local port clause declares a local port. If a simple name appears at the end of a component declaration, it must repeat the identifier of the component declaration.

Component instantiation statements:

A component instantiation statement defines a subcomponent of the design entity in which it appears, associates signals or values with the ports of that subcomponent, and associates values with generics of that subcomponent. This subcomponent is one instance of a class of components defined by a corresponding component declaration, design entity, or configuration declaration.

component_instantiation_statement ::=

instantiation_label :

instantiated_unit

[ generic_map_aspect ]

[ port_map_aspect ] ;

instantiated_unit ::=

[ component ] component_name

| entity entity_name [ ( architecture_identifier ) ]

| configuration configuration_name
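As a structural-modeling sketch, a component declaration and a matching component instantiation statement for the hypothetical adder entity used earlier might look like this (instance and signal names are arbitrary):

-- component declaration (in the declarative part of an architecture or a package)
component adder is
  generic ( WIDTH : integer := 8 );
  port    ( a, b : in  std_logic_vector(WIDTH-1 downto 0);
            sum  : out std_logic_vector(WIDTH-1 downto 0) );
end component adder;

-- component instantiation statement (in the statement part of an architecture)
u_adder0 : adder
  generic map ( WIDTH => 8 )
  port map    ( a => op1, b => op2, sum => result );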

FPGA (Field programmable gate array)

FPGA stands for field programmable gate array. It is a kind of integrated circuit that is designed to be configured by the user after manufacturing and used according to that configuration. An HDL (hardware description language) is used to specify the functionality of an FPGA. The functionality of an FPGA is similar to that of an application specific integrated circuit (ASIC); circuit diagrams are used to specify the design. An FPGA consists of logic components or logic gates (AND, OR, NAND, NOR, NOT, etc.) and allows these blocks to be wired together; it also includes combinational functions and memory elements, which may be simple flip-flops. It can implement logic functions in a similar way to an ASIC. Redesigning is also possible and is easy to implement at low cost compared to other approaches, which offers an advantage for many applications. Some FPGAs have analog features, including the slew rate and drive strength of each output pin. Another relatively common analog feature is differential comparators on input pins designed to be connected to differential signaling channels. Some FPGAs contain analog-to-digital converters and digital-to-analog converters integrated into the chip, allowing them to operate as a system on chip; such devices blur the line between the field programmable gate array and the field programmable analog array, and these features are fabricated into the FPGA depending on the user's choice. FPGA technology thus spans logic devices ranging from programmable read-only memories to programmable logic devices, in which programmable logic provides the connections between the logic gates.

Figure shows the expanded view of a typical FPGA

An FPGA is a collection of configurable logic blocks that can be connected through a vast interconnection matrix and formed into a complex digital circuit. It is mainly used in high speed digital applications where the design is given more importance than the cost. With the rapid increase in integration and the decrease in price, FPGAs are seeing large usage in the market with this advanced technology. In the next digital revolution, FPGAs (field programmable gate arrays) as well as CPLDs (complex programmable logic devices) are coming into existence for the dynamic development of digital systems, in the same way as microprocessors did in embedded systems.

Developers can design their circuits using diagram-based techniques, VHDL, Verilog, or any combination of these, depending upon their simplicity, efficiency, and ease of use. Nowadays FPGAs are becoming a staple of modern designs. Depending on the application, the components and their necessary code are dumped onto the FPGA (field programmable gate array) for its intended use. Any diagrammatic representations used in the FPGA design flow should first be converted into either VHDL or Verilog, depending on the necessity, and then used for compilation. The connectivity between the modules that are going to be embedded on the FPGA kit is based on either physical connectivity or logical connectivity, as shown in the following diagrammatic representation.

Figure shows the representation of the physical linking as well as the logical linking respectively

Therefore the connectivity starts with an entity declaration in VHDL (VHSIC hardware description language) or a module declaration in Verilog, followed by the VHDL architecture body or the Verilog module body and its parameters.

Figure shows the connectivity between the modules

Coming to digital design, buses play an important role in the connectivity of the nets in a digital design; they help manage the nets and display the design in a more readable form.

Figure shows the representation of the bus joiners

So in recent technology, these FPGA logic blocks are interconnected with an embedded microprocessor to form a completely programmable chip.

CONCLUSION

Spike sorting is a very challenging mathematical problem that has attracted the attention of scientists from different fields. It is indeed an interesting problem for researchers working on signal processing, especially those dealing with pattern recognition and machine learning techniques. It is also crucial for neurophysiologists, since an optimal spike sorting can dramatically increase the number of identified neurons and may allow the study of very sparsely firing neurons, which are hard to find with basic sorting approaches.

Given the extraordinary capabilities of current recording systems – allowing the simultaneous recording from dozens or even hundreds of channels – there is an urgent need to develop and optimize methods to deal with the resulting massive amounts of data. The reliable identification of the activity of hundreds of simultaneously recorded neurons will play a major role in future developments in Neuroscience.

In this article we gave a brief description of how to tackle the main issues of spike sorting. However, there are still many open problems, like the sorting of overlapping spikes, the identification of bursting cells and of nearly silent neurons, the development of robust and completely unsupervised methods, how to deal with non-stationary conditions, for example, due to drifting of the electrodes, how to quantify the accuracy of spike sorting outcomes, how to automatically distinguish single-units from multi-units, etc.

One of the biggest problems in developing optimal spike sorting algorithms is that we usually do not have access to the "ground truth". In other words, we do not have the exact information of how many neurons we are recording from and which spike corresponds to which neuron. The challenge is then to come up with realistic simulations and clever experiments, such as the ones described in the previous section, that allow the exact quantification of performance and the comparison of different spike sorting methods.