This repository was archived by the owner on Sep 17, 2019. It is now read-only.

Issue using JSON created field in statement. #151

Open

john-michaelburke opened this issue May 31, 2019 · 5 comments

@john-michaelburke

Having this issue on 7.1.1. I have looked at the definition of the event_as_json_keyword flag, and it looks like it attempts to do what I have already done with the JSON filter in Logstash.
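For reference, this is my (possibly wrong) reading of that option, as a minimal sketch only; I'm assuming the default keyword is @event, so please check the plugin README:

	jdbc {
		enable_event_as_json_keyword => true
		# Wherever the keyword (assumed default "@event") appears as a statement
		# parameter, the whole event is serialized to JSON and bound there,
		# instead of being looked up as an individual field.
		statement => [ "INSERT INTO log (payload) VALUES(?)", "@event" ]
	}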

My issue: I decode JSON with Logstash into a bunch of fields and then would like to send them off to a Postgres database, but the jdbc output does not seem to find these newly generated fields. I have verified the JSON is valid and is properly ingested into Elasticsearch. As you can see, I am also able to use these generated fields to create a new index_name field, which the jdbc output then ingests without complaint:
mutate { add_field => [ "index_name", "%{[MTE][test_station]}" ] }

JDBC Output error message:

JDBC - Exception. Not retrying {:exception=>#<RuntimeError: Invalid FieldReference: `%{[MTE][serial_number]}`>, :statement=>"IN…

My logstash config:

input {
	beats {
		client_inactivity_timeout => 12000000
		port => 5044
	}
}
filter {
	mutate {
		gsub => [
			"message", "\n", "",
			"message", " ", ""
		]
	}

	json {
		source => "message"
	}
	mutate {
		copy => {"[MTE][timestamp]" => "timestamp"}
	}
	mutate {
		gsub => [
			"timestamp", "T", ""
		]
	}
	date {
		match => ["timestamp", "YYYYMMddHHmmssZ"]
		timezone => "UTC"
		target => "@timestamp"
	}
	mutate {
		remove_field => "timestamp"
	}
	mutate {
		add_field => [ "index_name", "%{[MTE][test_station]}" ]
	}
	mutate {
		lowercase => "index_name"
	}
}   

output {
	elasticsearch {
		hosts => "elasticsearch:9200"
		manage_template => true
		index => "%{[index_name]}"
		template => "/usr/share/logstash/logstash-template.json"
		template_name => "mte_dynamic_template"
		template_overwrite => true
	}
	jdbc {
		enable_event_as_json_keyword => true
		connection_string => 'jdbc:postgresql://postgres:5432/postgres'
		username => 'asdf'
		password => 'asdf'
		#driver_jar_path => '/usr/share/logstash/postgres.jar'
		statement => [ "INSERT INTO mfg_db (timestamp, test_name, serial_number, test_pass) VALUES(CAST (? as timestamp), ?, ?, ?)",
		               "%{@timestamp}", "%{index_name}", "%{[MTE][serial_number]}", "%{[MTE][test_pass]}" ]
	}
}

@theangryangel
Owner

theangryangel commented May 31, 2019 via email

@theangryangel
Owner

theangryangel commented May 31, 2019

If the above doesn't help, shout and I'll try to reproduce the issue when I have a moment.

Please do make sure that the serial_number field is definitely in your event first, though (you have no idea how many times I've gone digging into something for someone, only for the field to literally not be there). I'm not seeing it in the rest of your config, which means it must be coming from your original event.
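One quick way to check is to temporarily add a stdout output with the rubydebug codec and confirm the nested field really is on the event (a generic Logstash debugging step, nothing specific to this plugin):

	output {
		stdout { codec => rubydebug }
	}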

@john-michaelburke
Author

john-michaelburke commented May 31, 2019

Thank you for reaching out!! In stdout it does appear as a field, albeit nested. I should say that as long as I copy the nested fields into new top-level fields, everything works fine. I've added back in one of the nested fields, which I copy into the new serial_number field... You can see where it fails below: the new field works, but it fails when I insert the nested field directly:
JDBC - Exception. Not retrying {:exception=>#<RuntimeError: Invalid FieldReference: %{[MTE][serial_number]}>, :statement=>"INSERT INTO mfg_db (timestamp, test_name, serial_number, test_pass, error_code) VALUES(CAST (? as timestamp), ?, ?, ?, ?)",

input {
	beats {
		client_inactivity_timeout => 12000000
		port => 5044
	}
}
filter {
	mutate {
		gsub => [
			"message", "\n", "",
			"message", " ", ""
		]
	}
	json {
		source => "message"
	}
	mutate {
		copy => {"[MTE][timestamp]" => "timestamp"}
	}
	mutate {
		gsub => [
			"timestamp", "T", ""
		]
	}
	date {
		match => ["timestamp", "YYYYMMddHHmmssZ"]
		timezone => "UTC"
		target => "@timestamp"
	}
	mutate {
		remove_field => "timestamp"
	}
	mutate {
		add_field => [ "index_name", "%{[MTE][test_station]}" ]
	}
	mutate {
		lowercase => "index_name"
	}
	mutate {
		add_field => [ "serial_number", "%{[MTE][serial_number]}" ]
	}
	mutate {
		add_field => [ "test_pass", "%{[MTE][test_pass]}" ]
	}
	mutate {
		add_field => [ "error_code", "%{[MTE][error_code]}" ]
	}
}   
output {
	elasticsearch {
		hosts => "elasticsearch:9200"
		manage_template => true
		index => "%{[index_name]}"
		template => "/usr/share/logstash/logstash-template.json"
		template_name => "mte_dynamic_template"
		template_overwrite => true
	}
	jdbc {
		connection_string => 'jdbc:postgresql://postgres:5432/postgres'
		username => 'asdf'
		password => 'asdf'
		statement => [ "INSERT INTO mfg_db (timestamp, test_name, serial_number, test_pass, error_code) VALUES(CAST (? as timestamp), ?, ?, ?, ?)",
		               "%{@timestamp}", "%{index_name}", "%{serial_number}", "%{[MTE][serial_number]}", "%{error_code}" ]
	}
}

I had the thought that maybe it was an issue with Logstash itself, so I tried putting the nested field into the elasticsearch output to see if I'd get a similar error. I did get an error, but only because the resolved index name is not lowercase, so Logstash itself resolves %{[MTE][serial_number]} just fine there; it only fails when used as a jdbc statement parameter:
logstash | [2019-05-31T20:58:37,154][ERROR][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"1020038501S1918000172", :_type=>"_doc", :routing=>nil}, #<LogStash::Event:0x12b9ae1a>], :response=>{"index"=>{"_index"=>"1020038501S1918000172", "_type"=>"_doc", "_id"=>nil, "status"=>400, "error"=>{"type"=>"invalid_index_name_exception", "reason"=>"Invalid index name [1020038501S1918000172], must be lowercase", "index_uuid"=>"_na_", "index"=>"1020038501S1918000172"}}}}
logstash | /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-jdbc-5.4.0/lib/logstash/outputs/jdbc.rb:283: warning: constant ::Fixnum is deprecated

Here are the edits I made to my logstash conf:
input {
	beats {
		client_inactivity_timeout => 12000000
		port => 5044
	}
}
filter {
	mutate {
		gsub => [
			"message", "\n", "",
			"message", " ", ""
		]
	}
	json {
		source => "message"
	}
	mutate {
		copy => {"[MTE][timestamp]" => "timestamp"}
	}
	mutate {
		gsub => [
			"timestamp", "T", ""
		]
	}
	date {
		match => ["timestamp", "YYYYMMddHHmmssZ"]
		timezone => "UTC"
		target => "@timestamp"
	}
	mutate {
		remove_field => "timestamp"
	}
	mutate {
		add_field => [ "index_name", "%{[MTE][test_station]}" ]
	}
	mutate {
		lowercase => "index_name"
	}
	mutate {
		add_field => [ "serial_number", "%{[MTE][serial_number]}" ]
	}
	mutate {
		add_field => [ "test_pass", "%{[MTE][test_pass]}" ]
	}
	mutate {
		add_field => [ "error_code", "%{[MTE][error_code]}" ]
	}
}   
output {
	elasticsearch {
		hosts => "elasticsearch:9200"
		manage_template => true
		index => "%{[MTE][serial_number]}"
		template => "/usr/share/logstash/logstash-template.json"
		template_name => "mte_dynamic_template"
		template_overwrite => true
	}
	jdbc {
		connection_string => 'jdbc:postgresql://postgres:5432/postgres'
		username => 'asdf'
		password => 'asdf'
		statement => [ "INSERT INTO mfg_db (timestamp, test_name, serial_number, test_pass, error_code) VALUES(CAST (? as timestamp), ?, ?, ?, ?)",
		               "%{@timestamp}", "%{index_name}", "%{serial_number}", "%{test_pass}", "%{error_code}" ]
	}
}

For now I can live with this by manually adding the fields, but I feel like there might be something here. Let me know if you'd like me to try anything else.
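One more thing I may try, going from a skim of the plugin README (an untested guess on my part, so treat the parameter style below as an assumption): the statement examples there seem to pass bare field references rather than %{...} sprintf strings, so the nested field might resolve if written as a plain reference:

		statement => [ "INSERT INTO mfg_db (timestamp, test_name, serial_number, test_pass) VALUES(CAST (? as timestamp), ?, ?, ?)",
		               "@timestamp", "index_name", "[MTE][serial_number]", "[MTE][test_pass]" ]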

@theangryangel
Owner

Can you add a redacted copy of an input event so I can have a play at some point this weekend, if I get a moment? Just to make sure I'm definitely using the same structure.
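Something along these lines would be plenty (values made up by me, nesting guessed from your filter config):

	{
		"MTE": {
			"timestamp": "20190531T201500+0000",
			"test_station": "STATION-01",
			"serial_number": "SN-REDACTED-0001",
			"test_pass": true,
			"error_code": 0
		}
	}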

@john-michaelburke
Author
