<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<!--Converted with LaTeX2HTML .95.3 (Nov 17 1995) by Nikos Drakos (nikos@cbl.leeds.ac.uk), CBLU, University of Leeds -->
<HTML>
<HEAD>
<TITLE>Abstract</TITLE>
<META NAME="description" CONTENT="Abstract">
<META NAME="keywords" CONTENT="visual_info">
<META NAME="resource-type" CONTENT="document">
<META NAME="distribution" CONTENT="global">
<LINK REL=STYLESHEET HREF="visual_info.css">
</HEAD>
<BODY LANG="EN">
 <BR> <HR><A NAME="tex2html20" HREF="node2.html"><IMG ALIGN=BOTTOM ALT="next" SRC="http://www.ecs.soton.ac.uk/l2h-icons//next_motif.gif"></A>   <A NAME="tex2html18" HREF="index.html"><IMG ALIGN=BOTTOM ALT="up" SRC="http://www.ecs.soton.ac.uk/l2h-icons//up_motif.gif"></A>   <A NAME="tex2html12" HREF="index.html"><IMG ALIGN=BOTTOM ALT="previous" SRC="http://www.ecs.soton.ac.uk/l2h-icons//previous_motif.gif"></A>         <BR>
<B> Next:</B> <A NAME="tex2html21" HREF="node2.html">References</A>
<B>Up:</B> <A NAME="tex2html19" HREF="index.html">Complex Texture Classification with </A>
<B> Previous:</B> <A NAME="tex2html13" HREF="index.html">Complex Texture Classification with </A>
<BR> <HR> <P>
<H1><A NAME="SECTION00010000000000000000">Abstract</A></H1>
<P>
We introduce a novel texture description scheme and
demonstrate it with our fast similarity search technique for content-based
retrieval and navigation applications. The texture representation uses a
combination of edge and region statistics. It is compared with the Multi-Resolution
Simultaneous Auto-Regressive Model and the Statistical Geometrical Features
techniques on the entire Brodatz texture set and on a collection
of more complex texture images obtained from a product catalogue. In both cases,
the edge based representation gives the best classification.
<P>
<H1>Introduction</H1>
Texture analysis has been widely studied, and a large number of approaches
have been developed. Amongst these methods, some of the popular models include
Markov Random Field (MRF) [<A HREF="node2.html#Cross83">6</A>], 
Simultaneous Auto-Regressive (SAR) [<A HREF="node2.html#Mao92">15</A>], Gabor
Filters [<A HREF="node2.html#Coggins85">5</A>], Wold Transform [<A HREF="node2.html#Picard95">13</A>], and Wavelets. 
Texture analysis techniques fall into three main categories: structural,
statistical, and combined structural-statistical approaches. Structural methods use the geometrical features
of texture primitives as the texture features. However, they require extensive
image pre-processing to extract the texture primitives, are mostly time-consuming,
and can often recognize only regular textures, although their features are normally rotation-invariant.
Statistical methods are the dominant approach for texture matching, and they can
work well on regular, random and quasi-random textures. Few researchers have
developed texture analysis techniques using combined statistical-structural methods.
Chen et al. [<A HREF="node2.html#Chen95">4</A>] use the statistical geometrical features to classify
textures from the entire Brodatz texture album [<A HREF="node2.html#Brodatz66">3</A>], and the method 
shows a good performance for classification.
<P>
Several studies [<A HREF="node2.html#Bovick90">2</A>] [<A HREF="node2.html#Coggins85">5</A>] [<A HREF="node2.html#Davis79">7</A>] [<A HREF="node2.html#Tamura78">18</A>] 
[<A HREF="node2.html#Patel92">16</A>] have shown that 
using edge information in the texture features can achieve good classification
performance. In this paper, we propose a novel edge based method which achieves a high
classification rate with the entire Brodatz texture database. One of our
objectives was to develop a texture matching technique which is effective with
complex textures such as those in commercial furniture catalogues (see figure
<A HREF="node1.html#figcomplicated">3</A>). We have compared our edge based method, the Statistical 
Geometrical Features (SGF) [<A HREF="node2.html#Chen95">4</A>] and the
Multi-Resolution Auto-Regressive Model (MR-SAR) [<A HREF="node2.html#Mao92">15</A>] using the entire Brodatz texture
database and a set of complex textures from the catalogue. In both cases, the new
Edge Based method gives the best retrieval results.
<P>
<H1>Edge Based Texture Classification<A NAME="sectedge">&#160;</A></H1>
Various methods have been developed by researchers to extract edge information 
from a texture image. These include Gabor filters by Coggins et al.
[<A HREF="node2.html#Coggins85">5</A>] and Generalized Co-occurrence matrices by Davis et al. [<A HREF="node2.html#Davis79">7</A>]. 
Patel et al. [<A HREF="node2.html#Patel92">16</A>] calculate edge direction using 3 <IMG WIDTH=8 HEIGHT=16 ALIGN=MIDDLE ALT="tex2html_wrap_inline393" SRC="img2.gif"  > 3 masks then use rank
order statistics to produce the texture features. Our approach to Edge Based texture
feature calculation begins like that of Patel et al. but where they provide only edge
information, our method also captures details of regions with no edge information, as these
too can contribute valuable information to the texture features. We also introduce other
low and high level texture measures as described below.
<P>
We calculate grey value variances of 4 different 
directions (0 <IMG WIDTH=4 HEIGHT=5 ALIGN=BOTTOM ALT="tex2html_wrap_inline395" SRC="img3.gif"  > , 45 <IMG WIDTH=4 HEIGHT=5 ALIGN=BOTTOM ALT="tex2html_wrap_inline397" SRC="img4.gif"  > , 90 <IMG WIDTH=4 HEIGHT=5 ALIGN=BOTTOM ALT="tex2html_wrap_inline399" SRC="img5.gif"  > , 135 <IMG WIDTH=4 HEIGHT=5 ALIGN=BOTTOM ALT="tex2html_wrap_inline401" SRC="img6.gif"  > ) from a 3 <IMG WIDTH=8 HEIGHT=16 ALIGN=MIDDLE ALT="tex2html_wrap_inline403" SRC="img7.gif"  > 3 mask. The direction with the
minimum variance is chosen as the label on the centre pixel of the mask.
However, some areas in an image may have no edge information, and these
can also be
used as part of the texture features. Before the direction from a mask
is determined, we must decide whether there is any edge information inside
the mask. To do this we calculate the sum of differences in each 3 <IMG WIDTH=8 HEIGHT=16 ALIGN=MIDDLE ALT="tex2html_wrap_inline405" SRC="img8.gif"  > 3 window.
<P>
<P> <IMG WIDTH=386 HEIGHT=50 ALIGN=BOTTOM ALT="displaymath391" SRC="img9.gif"  > <P>
where  <IMG WIDTH=8 HEIGHT=14 ALIGN=MIDDLE ALT="tex2html_wrap_inline407" SRC="img10.gif"  >  is the mean grey level value of the entire window,  <IMG WIDTH=55 HEIGHT=18 ALIGN=MIDDLE ALT="tex2html_wrap_inline409" SRC="img11.gif"  > ,
 and  <IMG WIDTH=41 HEIGHT=23 ALIGN=MIDDLE ALT="tex2html_wrap_inline411" SRC="img12.gif"  >  is the grey level value of pixel  <IMG WIDTH=31 HEIGHT=23 ALIGN=MIDDLE ALT="tex2html_wrap_inline413" SRC="img13.gif"  > . <BR>
<P>
Once the direction is decided, the labelling is performed as 0 <IMG WIDTH=4 HEIGHT=5 ALIGN=BOTTOM ALT="tex2html_wrap_inline415" SRC="img14.gif"  >  - 
Horizontal (<i>H</i>), 45 <IMG WIDTH=4 HEIGHT=5 ALIGN=BOTTOM ALT="tex2html_wrap_inline419" SRC="img15.gif"  >  - Right Diagonal (<i>RD</i>), 90 <IMG WIDTH=4 HEIGHT=5 ALIGN=BOTTOM ALT="tex2html_wrap_inline423" SRC="img16.gif"  >  - Vertical
(<i>V</i>), and 135 <IMG WIDTH=4 HEIGHT=5 ALIGN=BOTTOM ALT="tex2html_wrap_inline427" SRC="img17.gif"  >  -
Left Diagonal (<i>LD</i>).
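<P>
As an illustration, the direction labelling described above might be implemented as follows. This is a hedged sketch in Python: the blank-test threshold <i>T</i> and the exact pixel triples taken along each direction are our assumptions, not details specified in the text.

```python
import numpy as np

# Label constants; BLANK marks a window with no edge information.
BLANK, H, RD, V, LD = 0, 1, 2, 3, 4

# Assumed pixel triples along each direction in a 3x3 window (row, col).
DIRECTIONS = {
    H:  [(1, 0), (1, 1), (1, 2)],   # 0 degrees
    RD: [(2, 0), (1, 1), (0, 2)],   # 45 degrees
    V:  [(0, 1), (1, 1), (2, 1)],   # 90 degrees
    LD: [(0, 0), (1, 1), (2, 2)],   # 135 degrees
}

def label_window(win, T=10.0):
    """Label the centre pixel of a 3x3 grey-level window."""
    win = np.asarray(win, dtype=float)
    # Blank test: sum of differences from the window mean (threshold T is
    # an assumed parameter).
    if np.abs(win - win.mean()).sum() < T:
        return BLANK
    # Otherwise choose the direction with minimum grey-value variance.
    variances = {d: np.var([win[r, c] for r, c in pts])
                 for d, pts in DIRECTIONS.items()}
    return min(variances, key=variances.get)
```

For example, a window containing a uniform horizontal stripe has zero variance along the 0-degree line and is labelled <i>H</i>, while a flat window falls below the blank threshold.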
<P>
Figure <A HREF="node1.html#figedgeeffect">1</A>a shows an original texture image, and figure 
<A HREF="node1.html#figedgeeffect">1</A>b shows that accurate edge retrieval is accomplished.
The light areas in <A HREF="node1.html#figedgeeffect">1</A>b indicate a blank label and the darker areas are 
labelled with a direction.
<P>
<P><A NAME="69">&#160;</A><A NAME="figedgeeffect">&#160;</A> <IMG WIDTH=324 HEIGHT=175 ALIGN=BOTTOM ALT="figure63" SRC="img18.gif"  > <BR>
<STRONG>Figure 1:</STRONG> Effect of edge extraction from an image<BR>
<P>
<P>
After processing the entire image, the ratio for each 
edge direction and plain region is calculated as a fraction of the total number of labels:
<P><A NAME="eqedges">&#160;</A> <IMG WIDTH=500 HEIGHT=76 ALIGN=BOTTOM ALT="eqnarray72" SRC="img19.gif"  > <P>
where <i>D</i> is one of the labels Horizontal, Vertical, Right Diagonal, Left
Diagonal, and Blank.  <IMG WIDTH=21 HEIGHT=22 ALIGN=MIDDLE ALT="tex2html_wrap_inline433" SRC="img20.gif"  >  is the ratio of label <i>D</i> that appears in the image.  <IMG WIDTH=19 HEIGHT=14 ALIGN=MIDDLE ALT="tex2html_wrap_inline437" SRC="img21.gif"  >  is 
the number of appearances of <i>D</i> in the image.
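<P>
The ratio computation above amounts to simple label counting; a minimal sketch, with illustrative label names:

```python
from collections import Counter

def label_ratios(labels):
    """Return R_D = N_D / total for each label D in {H, V, RD, LD, B(lank)}."""
    counts = Counter(labels)
    total = len(labels)
    return {d: counts.get(d, 0) / total for d in ('H', 'V', 'RD', 'LD', 'B')}
```

The five ratios necessarily sum to one, since every labelled pixel carries exactly one label.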
<P>
Edge information alone is not sufficient for a complete description of the texture; 
the contrast across each edge can be relevant. If an edge direction of a
mask is found, the contrast ( <IMG WIDTH=39 HEIGHT=22 ALIGN=MIDDLE ALT="tex2html_wrap_inline441" SRC="img22.gif"  > ) of that direction, <i>D</i> (excluding Blank),  
within the mask is calculated as:
<P>
<P> <IMG WIDTH=500 HEIGHT=16 ALIGN=BOTTOM ALT="equation80" SRC="img23.gif"  > <P>
<P>
where  <IMG WIDTH=18 HEIGHT=14 ALIGN=MIDDLE ALT="tex2html_wrap_inline445" SRC="img24.gif"  >  and  <IMG WIDTH=19 HEIGHT=14 ALIGN=MIDDLE ALT="tex2html_wrap_inline447" SRC="img25.gif"  >  are the mean grey levels of the pixels on either side
of the determined direction. If the mask is classified as Blank, 
then the mean grey level value is computed instead.
<P>
When  <IMG WIDTH=39 HEIGHT=22 ALIGN=MIDDLE ALT="tex2html_wrap_inline449" SRC="img26.gif"  >  is summed over the entire image, equation
<A HREF="node1.html#eqcontrast_normalize">3</A> is applied to normalize it into the range [0-1] by dividing
by the maximum possible grey level value, e.g. 255. 
The values of  <IMG WIDTH=21 HEIGHT=22 ALIGN=MIDDLE ALT="tex2html_wrap_inline451" SRC="img27.gif"  >  and  <IMG WIDTH=39 HEIGHT=22 ALIGN=MIDDLE ALT="tex2html_wrap_inline453" SRC="img28.gif"  >  can be regarded as lower level
features.
<P>
<P><A NAME="eqcontrast_normalize">&#160;</A> <IMG WIDTH=500 HEIGHT=37 ALIGN=BOTTOM ALT="equation83" SRC="img29.gif"  > <P>
where  <IMG WIDTH=31 HEIGHT=22 ALIGN=MIDDLE ALT="tex2html_wrap_inline455" SRC="img30.gif"  >  is the maximum intensity level.
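<P>
A sketch of the contrast calculation follows. Two points are assumptions on our part: that the two "sides" of a direction are the pixel triples either side of the labelled line, and that the normalization of equation 3 divides the accumulated contrast by the number of contributing labels times the maximum grey level.

```python
import numpy as np

# Assumed pixel triples on either side of each labelled direction in a
# 3x3 window (row, col offsets).
SIDES = {
    'H':  ([(0, 0), (0, 1), (0, 2)], [(2, 0), (2, 1), (2, 2)]),
    'V':  ([(0, 0), (1, 0), (2, 0)], [(0, 2), (1, 2), (2, 2)]),
    'RD': ([(0, 0), (0, 1), (1, 0)], [(1, 2), (2, 1), (2, 2)]),
    'LD': ([(0, 1), (0, 2), (1, 2)], [(1, 0), (2, 0), (2, 1)]),
}

def window_contrast(win, label):
    """|m1 - m2|: difference of mean grey levels either side of the edge."""
    win = np.asarray(win, dtype=float)
    side1, side2 = SIDES[label]
    m1 = np.mean([win[r, c] for r, c in side1])
    m2 = np.mean([win[r, c] for r, c in side2])
    return abs(m1 - m2)

def normalised_contrast(total_contrast, n_labels, g_max=255.0):
    """Assumed form of equation 3: normalise accumulated contrast to [0, 1]."""
    return total_contrast / (n_labels * g_max) if n_labels else 0.0
```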
<P>
The higher level texture edge features are evaluated by using the
conditional probability between the edge direction of the centre of the  <IMG WIDTH=33 HEIGHT=20 ALIGN=MIDDLE ALT="tex2html_wrap_inline457" SRC="img31.gif"  > 
mask and the
surrounding locations. Figure <A HREF="node1.html#figecm">2</A> shows a matrix of conditional probabilities
which has a similar form to the Generalized Co-occurrence Matrix suggested by Davis et
al. [<A HREF="node2.html#Davis79">7</A>]. The differences are the use of conditional probabilities and the inclusion of
plain region statistics.
<P><A NAME="93">&#160;</A><A NAME="figecm">&#160;</A> <IMG WIDTH=300 HEIGHT=298 ALIGN=BOTTOM ALT="figure91" SRC="img32.gif"  > <BR>
<STRONG>Figure 2:</STRONG> Conditional Probability Matrix of Edge Information<BR>
<P>
For example,  <IMG WIDTH=52 HEIGHT=24 ALIGN=MIDDLE ALT="tex2html_wrap_inline459" SRC="img33.gif"  >  is the conditional probability of getting Vertical labels in a 3 <IMG WIDTH=8 HEIGHT=16 ALIGN=MIDDLE ALT="tex2html_wrap_inline461" SRC="img34.gif"  > 3 window
given that the central pixel is Horizontal. This is computed by counting the number of
appearances of vertical edge labels in the surrounding locations, divided by the area
of the mask (excluding the centre pixel). Each entry in the conditional probability matrix
is accumulated according to the labels of the central and surrounding locations in the
mask.
A 5 <IMG WIDTH=8 HEIGHT=16 ALIGN=MIDDLE ALT="tex2html_wrap_inline463" SRC="img35.gif"  > 5 probability matrix is then generated and normalized into the range [0-1] 
by dividing  <IMG WIDTH=42 HEIGHT=24 ALIGN=MIDDLE ALT="tex2html_wrap_inline465" SRC="img36.gif"  >  by  <IMG WIDTH=19 HEIGHT=14 ALIGN=MIDDLE ALT="tex2html_wrap_inline467" SRC="img37.gif"  > .
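<P>
The conditional probability matrix could be accumulated as below. This sketch assumes the 8 surrounding pixels of each 3x3 neighbourhood contribute to the counts, and that the normalization divides by the number of centre windows carrying each label; both are our reading of the description above.

```python
import numpy as np

LABELS = ['H', 'V', 'RD', 'LD', 'B']
IDX = {l: i for i, l in enumerate(LABELS)}

def cond_prob_matrix(label_map):
    """5x5 matrix M[i, j] ~ P(label j in the neighbourhood | centre label i)."""
    label_map = np.asarray(label_map)
    rows, cols = label_map.shape
    M = np.zeros((5, 5))
    centre_counts = np.zeros(5)
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            i = IDX[label_map[r, c]]
            centre_counts[i] += 1
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    if dr == 0 and dc == 0:
                        continue  # exclude the centre pixel
                    M[i, IDX[label_map[r + dr, c + dc]]] += 1
    # Divide by the surrounding area (8) and by the number of centre
    # windows with each label, giving values in [0, 1].
    for i in range(5):
        if centre_counts[i]:
            M[i] /= 8 * centre_counts[i]
    return M
```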
<P>
<H1>Similarity Measurement</H1>
In [<A HREF="node2.html#Mao92">15</A>], Mao et al. reported that using a large number of parameters can
cause severe averaging over powerful discriminatory features.
When comparing textures, contributions to the similarity measure from edge information
should be weighted according to the fractions of those edges occurring in the images, i.e.
by the ratios,  <IMG WIDTH=21 HEIGHT=22 ALIGN=MIDDLE ALT="tex2html_wrap_inline469" SRC="img38.gif"  > . If two images both have a high ratio of horizontal
edges, their similarity value on horizontal edges should also increase. 
If one image has a high ratio and the other a low ratio for an edge property,
then the weight is taken as the average of the two ratios. In this way, we
can match two images on both their similarity and their dissimilarity.
The weight  <IMG WIDTH=21 HEIGHT=14 ALIGN=MIDDLE ALT="tex2html_wrap_inline471" SRC="img39.gif"  >  is evaluated as:
<P> <IMG WIDTH=500 HEIGHT=17 ALIGN=BOTTOM ALT="equation98" SRC="img40.gif"  > <P>
where  <IMG WIDTH=21 HEIGHT=22 ALIGN=MIDDLE ALT="tex2html_wrap_inline473" SRC="img41.gif"  >  and  <IMG WIDTH=21 HEIGHT=24 ALIGN=MIDDLE ALT="tex2html_wrap_inline475" SRC="img42.gif"  >  are the ratios of the general term <i>D</i> of the two 
different images. 
The similarity, <i>s</i>, is the sum of the squared Euclidean distances of the
conditional probabilities and the contrasts, multiplied with the weights.
<P> <IMG WIDTH=500 HEIGHT=35 ALIGN=BOTTOM ALT="equation100" SRC="img43.gif"  > <P>
<P> <IMG WIDTH=500 HEIGHT=43 ALIGN=BOTTOM ALT="eqnarray103" SRC="img44.gif"  > <P>
where <i>D</i> and  <IMG WIDTH=12 HEIGHT=11 ALIGN=BOTTOM ALT="tex2html_wrap_inline483" SRC="img45.gif"  >  are the general terms of <i>H, V, LD, RD</i>, and <i>B</i>.
This measure uses a weighted combination of contrast across edges and
conditional probability of edge directions and plain regions to assess the
similarity between two images.
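<P>
Putting the weights and distances together, the similarity measure might be sketched as follows. The exact grouping of the contrast and conditional-probability terms under the weights is our assumption about equations 4-6, not a transcription of them.

```python
import numpy as np

LABELS = ['H', 'V', 'RD', 'LD', 'B']

def similarity(ratios1, ratios2, P1, P2, contrast1, contrast2):
    """Weighted sum of squared Euclidean distances between the conditional
    probability matrices (rows of P1, P2) and the contrast features of two
    images; the weight for each label is the average of its two ratios."""
    s = 0.0
    for i, d in enumerate(LABELS):
        w = (ratios1[d] + ratios2[d]) / 2.0          # w_D = (R_D1 + R_D2) / 2
        s += w * np.sum((P1[i] - P2[i]) ** 2)        # conditional probabilities
        s += w * (contrast1[d] - contrast2[d]) ** 2  # contrasts
    return s
```

Two images with identical feature sets yield a distance of zero, so smaller values indicate greater similarity.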
<P>
<H1>Brodatz Texture Database</H1>
Each Brodatz image is digitized into a 512 <IMG WIDTH=8 HEIGHT=16 ALIGN=MIDDLE ALT="tex2html_wrap_inline489" SRC="img46.gif"  > 512 image with 256 grey levels, and
cut into 16 subimages of 128 <IMG WIDTH=8 HEIGHT=16 ALIGN=MIDDLE ALT="tex2html_wrap_inline491" SRC="img47.gif"  > 128. A total of 1792 (112  <IMG WIDTH=8 HEIGHT=16 ALIGN=MIDDLE ALT="tex2html_wrap_inline493" SRC="img48.gif"  >  16) images 
are produced from the texture album. Eight out of each set of 16 subimages are 
randomly taken, texture features are extracted and pre-indexed into a training database.
The rest of the subimages are used for testing. A similar experiment was also performed
by Manjunath et al. [<A HREF="node2.html#Manjunath96">14</A>]. They used a nearly complete set of Brodatz
textures (excluding D31, D32, and D99) to compare Gabor wavelets, MRSAR, and
pyramid-structured and tree-structured wavelet transforms. The MRSAR result, at
73%, was fractionally lower than the 74% achieved by the Gabor wavelets.
<P>
<H1>Complicated Texture Database</H1>
Thirty texture patterns (11 classes) were extracted from a commercial furniture
catalogue. Figures <A HREF="node1.html#figcomplicated">3</A>a - <A HREF="node1.html#figcomplicated">3</A>d show some of the 
patterns, which are categorised into groups such as Abbey Stripe, Georgia Damask,
Tournament Stripe, etc. Some of the images were taken directly from texture samples
in the catalogue, others were extracted from the pictures of furniture.
<P>
<P><A NAME="122">&#160;</A><A NAME="figcomplicated">&#160;</A> <IMG WIDTH=324 HEIGHT=357 ALIGN=BOTTOM ALT="figure114" SRC="img49.gif"  > <BR>
<STRONG>Figure 3:</STRONG> Examples of commercial complicated texture patterns<BR>
<P>
<P>
<H1>Results</H1>
Two different techniques with good matching capabilities, SGF [<A HREF="node2.html#Chen95">4</A>]
and MRSAR [<A HREF="node2.html#Mao92">15</A>], are chosen to compare with our method.
For each test image, we calculate the Euclidean distance between
feature vectors to
retrieve the top 15 nearest matches out of the 896 features in the image database.
If all the 8 subimages from the same original texture image are retrieved, the 
testing image scores a 100% retrieval rate; if 7 subimages are retrieved, the
result is 87.5%, and so on.
This experiment is repeated three times and the results are averaged to
even out the random selection of the test and sample sets from the Brodatz database.
For all the 8 test images of each class, we averaged the classification 
rate. The
number of Brodatz textures that scored a classification rate in a certain
percentage range are presented in table <A HREF="node1.html#tableBrodatzresult">1</A>. Our 
method clearly outperforms the other two methods in that more than half 
of the entire Brodatz textures have an accuracy between 90% - 100%. 
Our method also shows that an 83% correct classification rate is
obtained on average over all the Brodatz textures compared to 75.5% and 71.4%
achieved by MRSAR and SGF respectively.
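<P>
The scoring scheme can be sketched as a simple fraction; the identifiers below are invented for illustration.

```python
def retrieval_rate(retrieved_parents, parent_id, n_siblings=8):
    """Fraction of the parent texture's subimages found among the top
    matches: 8/8 scores 100%, 7/8 scores 87.5%, and so on."""
    hits = sum(1 for p in retrieved_parents if p == parent_id)
    return hits / n_siblings
```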
<P>
<P><A NAME="131">&#160;</A><P><A NAME="figcbr">&#160;</A> <IMG WIDTH=825 HEIGHT=524 ALIGN=BOTTOM ALT="figure129" SRC="img50.gif"  > <P><BR>
<STRONG>Figure 4:</STRONG> Content Based Retrieval from a lace texture<BR>
<P>
<P>
Some textures have very low classification success with all the matching techniques tested,
for example D43, D44, and D58. This is due to highly inhomogeneous patterns spread
over the whole original uncropped image. A preliminary manual examination of other textures
for which the very best matches are not from the parent image suggests that the best matches
 are visually similar to the
query textures. For example, some of the nearest matches of D54 subimage query are classified
to D05, and their
appearances are visually similar. 
Further investigation of the quality of the result order is in progress.
<P>
<P><A NAME="138">&#160;</A><A NAME="tableBrodatzresult">&#160;</A> <IMG WIDTH=395 HEIGHT=145 ALIGN=BOTTOM ALT="table134" SRC="img51.gif"  > <BR>
<STRONG>Table 1:</STRONG> The classification results of all the 112 Brodatz textures<BR>
<P>
<P>
For the complex texture experiments, since each class has a different number of
samples, we set the number of nearest matches considered for a class to double
the number of images in that class. The entries in table 
<A HREF="node1.html#tablecomplexresult">2</A> represent the number of classes which score in each accuracy 
percentage range. Although these numbers are very low in this particular test, they
also suggest that the Edge Based method is performing better than the other two
approaches.
<P>
<P><A NAME="146">&#160;</A><A NAME="tablecomplexresult">&#160;</A> <IMG WIDTH=395 HEIGHT=186 ALIGN=BOTTOM ALT="table142" SRC="img52.gif"  > <BR>
<STRONG>Table 2:</STRONG> The classification result of complex textures<BR>
<P>
<P>
<P><A NAME="151">&#160;</A><P><A NAME="figcbn">&#160;</A> <IMG WIDTH=675 HEIGHT=644 ALIGN=BOTTOM ALT="figure149" SRC="img53.gif"  > <P><BR>
<STRONG>Figure 5:</STRONG> Content Based Navigation from a commercial furniture complex pattern<BR>
<P>
<P>
<H1>Content-based Retrieval and Navigation</H1>
A hypermedia package, Multimedia Architecture for Video, Image and Sound (MAVIS) [<A HREF="node2.html#Lewis97">12</A>], is being
developed at the University of Southampton; it is capable of content based retrieval and 
navigation for non-text media. In this section, the edge based texture
classification is demonstrated with MAVIS for retrieval of similar complex furniture
textures and navigation to different media using links based on texture matching.
<P>
To index multidimensional image features, the R-tree [<A HREF="node2.html#Guttman84">9</A>] is one of
the popular choices, and Beckmann et al. [<A HREF="node2.html#Beckmann90">1</A>] developed the R*-tree, which
improves space utilization compared with the R-tree. Other popular content-based
retrieval applications, such as QBIC [<A HREF="node2.html#Flickner95">8</A>], also use the R*-tree 
for indexing image features.
<P>
For efficient content-based retrieval, we use the Hilbert R-tree [<A HREF="node2.html#Kamel93">10</A>] for 
fast multi-dimensional indexing and retrieval; it has been shown to
outperform the R*-tree. We have compared the
performance of the Hilbert R-tree and the R*-tree on an image database. The
results showed that the Hilbert R-tree accesses significantly fewer nodes, is
easier to implement, and takes much less time to build the index tree.
We have experimented with the <i>k</i> nearest neighbour search 
for R-trees by Roussopoulos et al. [<A HREF="node2.html#Roussopoulos95">17</A>], which performs well. However,
it is possible to improve on the <i>k</i> nearest neighbours search for a clustered image
database. 
In [<A HREF="node2.html#Kuan97">11</A>], we showed that, on image feature data, fewer data comparisons
and less computation are needed than with the Roussopoulos et al. <i>k</i> nearest
neighbour search, giving faster retrieval.
<P>
We use a subset of features (low-level: Ratio and Contrast)
indexed by a Hilbert R-tree, and an enlarged set of <i>k+t</i> nearest neighbours is retrieved using the
normal Euclidean distance as the similarity measure. The weighted 
Euclidean distance is then computed among these <i>k+t</i> retrievals using the
full feature vectors described above. With <i>t =</i> 40, the accuracy of classifying
all the Brodatz textures drops by only around 3% compared with a sequential <i>k</i>
nearest neighbour search on the full feature set.
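<P>
The two-stage search can be sketched as below. A linear scan stands in for the Hilbert R-tree traversal, and the function and parameter names are ours, not MAVIS's.

```python
import numpy as np

def two_stage_search(query_low, query_full, db_low, db_full, full_dist,
                     k=15, t=40):
    """Stage 1: take the k+t nearest neighbours on the low-level feature
    subset using plain Euclidean distance. Stage 2: re-rank only those
    candidates with the full weighted distance and return the top k."""
    d_low = np.linalg.norm(db_low - query_low, axis=1)            # stage 1
    candidates = np.argsort(d_low)[:k + t]
    d_full = [full_dist(query_full, db_full[i]) for i in candidates]  # stage 2
    order = np.argsort(d_full)[:k]
    return [int(candidates[i]) for i in order]
```

Only the cheap low-level distance touches the whole database; the expensive weighted measure is evaluated on just <i>k+t</i> candidates, which is where the speed-up comes from.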
<P>
Figure <A HREF="node1.html#figcbr">4</A> shows content-based retrieval of a Brodatz lace texture (D40), and
the results show highly accurate retrieval from the 1792 images stored in the
database. All the 15 subimages are retrieved within the top 20 nearest matches.
<P>
In the MAVIS system it is possible to author generic links [<A HREF="node2.html#Lewis97">12</A>] from images
to other parts of the information space using texture as the key. Once authored, the
link may be followed from similar instances of the texture. In the next example,
generic links have been authored from a texture patch to an image of a sofa with a
similar texture, and a link has also been authored to some text describing the texture.
Figure <A HREF="node1.html#figcbn">5</A> shows an application using a similar commercial pattern
(Tournament Stripe) to navigate to other information with
related content. An ordered list of links is displayed in the <EM>Image links</EM> window
with their iconic images. The related text information of the furniture pattern is shown
in the <EM>txt</EM> window
when the text media link (in <EM>Image links</EM> window) is selected and the <EM>Follow link</EM>
button is clicked. A sofa image (in <EM>mavis_img_viewer</EM> window) with the same furniture 
pattern is located with similar actions.
<P>
<H1>Conclusion</H1>
<P>
A new texture classification technique has been proposed which uses edge and plain region
information to characterize a texture. The method has been compared with MRSAR and SGF on the
entire Brodatz texture database. The results show that our
method outperforms
the other two methods: more than half of the entire texture database is 
matched with 90%-100% reliability, and on average our method achieves 83% matching accuracy over all the Brodatz textures. With complex
commercial textures, our method also gives a better classification rate.
We have demonstrated content based
retrieval and navigation with the edge based texture scheme, which provides an accurate 
method for content-based multimedia applications. Currently, our method is not rotation or
scale invariant, but modifications to include rotation invariance are in progress.
<BR><BR>
<BR> <HR>
<P><ADDRESS>
<I>Joseph Kuan <BR>
Wed Jun  3 14:01:57 BST 1998</I>
</ADDRESS>
</BODY>
</HTML>
