
Duplicate Logs Sent to Output from SQLite Buffer #2757

Open
Meetp369 opened this issue Aug 2, 2024 · 0 comments
Labels
bug, needs investigation (it looks as though we have all the information needed, but investigation is required)

Comments


Meetp369 commented Aug 2, 2024

I am currently using SQLite as a buffer and am experiencing an issue with redundant logs being sent once the output becomes available again. Specifically, I've noticed the following behavior:

  • When there is only one entry present in the database file, only one log is sent to the output, which is expected.
  • However, when there are four entries present in the database file, more than four logs are sent to the output, resulting in duplicate logs.

This redundancy leads to a mismatch between the number of entries in the database file and the number of logs sent to the output.

Steps to Reproduce:

  1. Run the Go file below, which implements a mock server. This server is used in the input section of http_client; each time http_client sends a request to it, the server responds with a log message.
package main
import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

var count = 1

// getLogEntry alternates between two log entries based on the value of count.
func getLogEntry() string {
	if count%2 == 0 {
		return "log 1"
	}
	return "log 2"
}

func logsHandler(w http.ResponseWriter, r *http.Request) {
	logs := getLogEntry()

	// Prepare and marshal the response
	responseJSON, err := json.Marshal(logs)
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}

	// Set Content-Type and write the response
	w.Header().Set("Content-Type", "application/json")

	log.Println(count)
	// Increment count for next request
	count++
	if _, err := w.Write(responseJSON); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
}

func main() {
	http.HandleFunc("/log", logsHandler)

	fmt.Printf("Starting server on port :8081\n")
	if err := http.ListenAndServe(":8081", nil); err != nil {
		log.Fatalf("Could not start server: %s\n", err)
	}
}
  2. Run redpanda-connect with the following configuration. Since the output section's http_client is inactive (the server at http://127.0.0.1:9004/receive is not running), all logs are stored in the SQLite database file (buffer).
input:
  broker:
    inputs:
      - http_client:
          url: "http://127.0.0.1:8081/log"
          verb: GET
          rate_limit: r1

rate_limit_resources:
  - label: r1
    local:
      count: 1
      interval: 5s
buffer:
  sqlite:
    path: /root/test/foo.db

pipeline:
  processors:
    - bloblang: |
        root = {
          "logs": this
        }

output:
  http_client:
    url: "http://127.0.0.1:9004/receive"
    verb: POST
  3. Stop the redpanda-connect service.
  4. Run the redpanda-connect service again with the configuration below, using the same SQLite database file (buffer) that was created in step 2.
input:
  broker:
    inputs:
      - http_client:
          url: "http://127.0.0.1:8081/log"
          verb: GET
          rate_limit: r1

rate_limit_resources:
  - label: r1
    local:
      count: 1
      interval: 5s
buffer:
  sqlite:
    path: /root/test/foo.db

pipeline:
  processors:
    - bloblang: |
        root = {
          "logs": this
        }

output:
  stdout:
    codec: lines
  5. The duplicate logs will be visible in the terminal or console where the redpanda-connect (Benthos) service is running.
@mihaitodor added the bug and needs investigation labels on Aug 21, 2024