Depends on what kind of search engine you want to make and how deep into things you want to get.
Also, it depends largely on what you are searching. Are you searching HTML pages on the web? TXT documents on a local computer? PDFs? Images?
I'll assume we're talking HTML documents. The easy way is to take the content of the article, stick it in a text field in MySQL, and let someone search it with a LIKE '%" . $wordtheytypedin . "%' statement.
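Something like this, as a rough sketch (the table and column names, articles, body, and title, are just placeholders I made up, and I'm binding the word as a parameter instead of concatenating it straight into the query):

<?php
// The "easy way": whole article text in one column, searched with LIKE '%word%'.
$pdo = new PDO('mysql:host=localhost;dbname=mysite;charset=utf8mb4', 'user', 'pass');

$word = $_GET['q'] ?? '';

// Same idea as LIKE '%<word they typed>%', just with the word bound as a parameter.
$stmt = $pdo->prepare("SELECT id, title FROM articles WHERE body LIKE ?");
$stmt->execute(['%' . $word . '%']);

foreach ($stmt->fetchAll(PDO::FETCH_ASSOC) as $row) {
    echo $row['title'] . "\n";
}

Keep in mind a LIKE with a leading % can't use a normal index, so this only really holds up for a small site.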
A better, more accurate way is to strip the HTML from every page, get a count of each unique word on the page, and make each one of those words a record in a WORD table.
WORD -
ID, word, soundex (if you wanted)
LINK_WORDS_AND_PAGES -
ID, word_ID, page_ID, rank
PAGES -
ID, page
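In MySQL that schema would look something like this (the column types and sizes are my own guesses, and I'm reusing the $pdo connection from above):

$pdo->exec("CREATE TABLE pages (
    id   INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    page VARCHAR(255) NOT NULL               -- the page's URL or file path
)");

$pdo->exec("CREATE TABLE words (
    id      INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    word    VARCHAR(64) NOT NULL UNIQUE,
    soundex VARCHAR(20) NULL                 -- optional, e.g. SOUNDEX(word) for fuzzy matching
)");

$pdo->exec("CREATE TABLE link_words_and_pages (
    id      INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    word_id INT UNSIGNED NOT NULL,
    page_id INT UNSIGNED NOT NULL,
    `rank`  FLOAT NOT NULL                   -- count or percentage of the word on that page
)");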
Record all the pages in the PAGES table, and record the links between each word and each page in the LINK_WORDS_AND_PAGES table, along with a 'rank', which is either the number of times the word occurs on the page or, better, the percentage of the page's words that are that word.
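The indexing step could look roughly like this (the function name and the lowercase table names are mine, and str_word_count() only handles plain ASCII words, which is fine for a sketch):

<?php
// Index one page: strip the HTML, count each unique word, and store one row per
// word in link_words_and_pages, with rank = percentage of the page's words.
function index_page(PDO $pdo, int $pageId, string $html): void
{
    $text  = strtolower(strip_tags($html));   // strip the HTML, normalise case
    $words = str_word_count($text, 1);        // list of the words on the page
    $total = count($words);
    if ($total === 0) {
        return;
    }

    $insertWord = $pdo->prepare("INSERT IGNORE INTO words (word) VALUES (?)");
    $selectWord = $pdo->prepare("SELECT id FROM words WHERE word = ?");
    $insertLink = $pdo->prepare(
        "INSERT INTO link_words_and_pages (word_id, page_id, `rank`) VALUES (?, ?, ?)"
    );

    foreach (array_count_values($words) as $word => $count) {
        $insertWord->execute([$word]);        // no-op if the word is already known
        $selectWord->execute([$word]);
        $wordId = (int) $selectWord->fetchColumn();

        $insertLink->execute([$wordId, $pageId, 100 * $count / $total]);
    }
}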
Then when people search, first you look up the word they typed to get its ID. Then you get everything from LINK_WORDS_AND_PAGES WHERE word_ID = [whatever word ID that is].
Then get the pages from there, and rank them by the sum of the ranks across all the words they searched for. Make sense?
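To make it concrete, the search could be a single query that joins the three tables and orders by the summed rank (again, the lowercase names are mine, and I'm doing the word lookup and the page ranking in one go rather than as two separate steps):

<?php
// Rank pages by the sum of the per-word ranks for every word the user typed.
$terms = str_word_count(strtolower($_GET['q'] ?? ''), 1);
if (!$terms) {
    exit('Nothing to search for.');
}
$placeholders = implode(',', array_fill(0, count($terms), '?'));

$stmt = $pdo->prepare("
    SELECT p.id, p.page, SUM(l.`rank`) AS score
    FROM   link_words_and_pages AS l
    JOIN   words AS w ON w.id = l.word_id
    JOIN   pages AS p ON p.id = l.page_id
    WHERE  w.word IN ($placeholders)
    GROUP  BY p.id, p.page
    ORDER  BY score DESC
");
$stmt->execute($terms);

foreach ($stmt->fetchAll(PDO::FETCH_ASSOC) as $row) {
    echo $row['page'] . ' (' . $row['score'] . ")\n";
}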