Using a free SearXNG service with Ollama

**SearXNG is a HUGE Advantage**

SearXNG is a self-hosted metasearch engine, which means you don’t rely on external APIs with their potential costs and
rate limits. It’s privacy-focused and well suited to this purpose, and it simplifies things significantly.

**Here’s a plan using PHP and SearXNG**

1. **SearXNG Setup:** You already have this – confirm it’s working and accessible. Note the URL for your SearXNG
instance.
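Before wiring anything else up, it’s worth confirming that the JSON API actually responds. The sketch below assumes an instance at `http://localhost:8080` (adjust to your own URL), and note that `format=json` only works if the `json` format is enabled in SearXNG’s `settings.yml` (under `search: formats:`); otherwise the request is rejected.

```php
<?php
// Quick check that the SearXNG JSON API is reachable.
// Assumed URL -- replace with your own instance.
$base = 'http://localhost:8080';
$json = @file_get_contents($base . '/search?q=test&format=json');

if ($json === false) {
    echo "SearXNG not reachable, or the json format is not enabled in settings.yml.\n";
} else {
    $data = json_decode($json, true);
    echo "OK - " . count($data['results'] ?? []) . " results returned.\n";
}
```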

2. **PHP Script (search_and_query.php – Example):** This is the core logic.

```php
<?php

// Configuration
$searxng_url = 'YOUR_SEARXNG_URL'; // Replace with your SearXNG base URL
$max_results = 5; // Maximum number of search results to include

// Get the user query from the GET request (e.g., ?query=something)
$query = $_GET['query'] ?? '';

if (empty($query)) {
    echo "Please provide a query.";
    exit;
}

// Perform the SearXNG search. Note: SearXNG has no URL parameter to limit
// the result count, so we limit it ourselves below with array_slice().
$url = rtrim($searxng_url, '/') . '/search?q=' . urlencode($query) . '&format=json';

$json_response = file_get_contents($url);

if ($json_response === false) {
    echo "Error: Could not connect to SearXNG.";
    exit;
}

$search_results = json_decode($json_response, true);

if (isset($search_results['results']) && is_array($search_results['results'])) {
    $search_result_titles = [];
    foreach (array_slice($search_results['results'], 0, $max_results) as $result) {
        $search_result_titles[] = $result['title'];
    }

    // Construct the prompt for the LLM (adjust as needed)
    $prompt = "Here are some relevant search results:\n";
    foreach ($search_result_titles as $title) {
        $prompt .= "- " . $title . "\n";
    }
    $prompt .= "\nNow, answer the following question based on the provided context:\n" . $query;

    // Here's where you'd send the prompt to your LLM (e.g., using Ollama's API).
    // For demonstration, we just echo the prompt.
    echo "Prompt to LLM:\n" . $prompt;

} else {
    echo "No search results found.";
}
```

**Explanation:**

* **Configuration:** You *must* replace `YOUR_SEARXNG_URL` with the actual URL of your SearXNG instance.
* **Gets User Query:** The script retrieves the search query from the URL using `$_GET['query']`. You’ll
need to pass the query in the URL, like `?query=What is the capital of France?`.
* **SearXNG Query:** It constructs a URL to query SearXNG, including the query and formatting as JSON.
* **Gets Search Results:** It fetches the JSON response from SearXNG using `file_get_contents()`.
* **Extracts Titles:** It extracts the titles of the search results.
* **Constructs Prompt:** It creates a prompt to send to your LLM, including the search results. This is the
critical part for tailoring to your LLM’s format.
* **LLM Integration:** The `echo "Prompt to LLM:\n" . $prompt;` line is a placeholder. You will need to
replace this with the actual code that sends the prompt to your LLM.
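That placeholder `echo` can be swapped for a call to Ollama’s HTTP API. The sketch below assumes Ollama is running at its default address (`http://localhost:11434`) and that a model such as `llama3` has already been pulled — adjust both to match your setup.

```php
<?php
// Minimal sketch: send a prompt to a local Ollama server and return the reply.
// Assumes Ollama's default endpoint and an already-pulled 'llama3' model.
function query_ollama(string $prompt, string $model = 'llama3'): ?string {
    $payload = json_encode([
        'model'  => $model,
        'prompt' => $prompt,
        'stream' => false, // one complete JSON object instead of a stream
    ]);

    $ch = curl_init('http://localhost:11434/api/generate');
    curl_setopt_array($ch, [
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_POST           => true,
        CURLOPT_POSTFIELDS     => $payload,
        CURLOPT_HTTPHEADER     => ['Content-Type: application/json'],
        CURLOPT_TIMEOUT        => 120, // local models can take a while
    ]);
    $raw = curl_exec($ch);
    curl_close($ch);

    if ($raw === false) {
        return null;
    }
    $data = json_decode($raw, true);
    return $data['response'] ?? null; // Ollama puts the answer in 'response'
}

// Usage (in place of the echo in the script above):
// echo query_ollama($prompt) ?? 'Error: no response from Ollama.';
```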

3. **Integration with your PHP/AJAX Workflow**

```javascript
// Example using jQuery AJAX
$.ajax({
    url: 'search_and_query.php',
    type: 'GET',
    data: { query: 'Your Search Query' }, // Pass the query
    success: function(response) {
        // Process the LLM's response (which will be the prompt in this example)
        console.log(response); // Or display it on the page
    },
    error: function(error) {
        console.error('Error:', error);
    }
});
```

**Key points and next steps:**

* **Replace placeholders:** Make sure you replace `YOUR_SEARXNG_URL` with your actual SearXNG URL and
implement the code to send the prompt to your LLM (Ollama, etc.).
* **Error handling:** Add more robust error handling.
* **Prompt Engineering:** Experiment with different prompt formats to get the best results from your LLM.
Consider including snippets of the full search result (not just the titles).
* **Ollama API:** Research how to call the Ollama API from PHP. You’re likely going to use `curl` or a
similar library.
* **Security:** Consider implementing input sanitization to prevent injection attacks.
* **Display the LLM Output:** In the jQuery AJAX example, `console.log(response);` is just a placeholder. You
need to write code to display the LLM’s actual response on your webpage, instead of just the prompt.
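On the security point, even a small amount of validation before the query reaches SearXNG (and escaping before anything is echoed back to the browser) goes a long way. A minimal sketch, with an illustrative helper name:

```php
<?php
// Minimal input hygiene for the search query: trim it, strip control
// characters, and cap the length before use.
function clean_query(string $raw, int $max_len = 200): string {
    $q = trim($raw);
    $q = preg_replace('/[\x00-\x1F\x7F]/u', '', $q); // drop control chars
    return mb_substr($q, 0, $max_len);
}

// When echoing anything user-supplied back to the page, escape it:
// echo htmlspecialchars($query, ENT_QUOTES, 'UTF-8');
```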

Using SearXNG simplifies things considerably: there’s no API key to manage, no per-request cost, and, because the
instance is self-hosted, you keep full control of your data. This significantly reduces the complexity compared to
dealing directly with commercial search APIs, and it gives you a good base to get started with.

This is a starting point. The crucial part is integrating the prompt into your LLM. Focus on that first, and
then gradually refine the code for better results. Good luck!
