Issue using JSON created field in statement. #151
enable_event_as_json_keyword doesn’t do what you think it does. You don’t seem to be using it for its intended purpose, so start off by removing that setting.
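(Roughly speaking, and assuming I’m remembering the plugin docs right, that setting exists so you can push the entire event, serialised as JSON, into a single statement parameter via a magic keyword. A minimal sketch only; the table and column names here are made up for illustration:)

jdbc {
  connection_string => 'jdbc:postgresql://postgres:5432/postgres'
  enable_event_as_json_keyword => true
  event_as_json_keyword => "@event"
  # any statement parameter equal to the keyword above is replaced with the full event serialised as JSON
  statement => [ "INSERT INTO raw_events (payload) VALUES(CAST(? AS json))", "@event" ]
}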
Second, double-check that the fields actually exist in your event. Use the stdout output to confirm you have the serial_number field.
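(For example, temporarily adding this to the output section will print every event in full, so you can see whether [MTE][serial_number] is actually there:)

output {
  stdout { codec => rubydebug }   # dumps the complete event, including nested fields
}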
After that I’d also double-check the nested-field syntax. When you use the %{} notation, the value is passed directly to the sprintf function that Logstash provides on the event. I’m not able to confirm from where I am tonight that what you have is definitely correct.
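(Off the top of my head, the two ways of writing a nested field as a statement parameter look like this; it may be worth trying both, as I can’t confirm right now which one the output expects:)

"%{[MTE][serial_number]}"   # sprintf-style string, resolved via event.sprintf
"[MTE][serial_number]"      # plain field-reference form, no %{} wrapper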
On 31 May 2019, at 19:11, John-Michael Burke <***@***.***> wrote:
Having this issue on 7.1.1. I have looked at the definition of this flag, event_as_json_keyword, and it looks like it will attempt to do what I have already done with the json filter in Logstash.
My issue: I decode JSON with Logstash into a bunch of fields and then would like to send it off to a Postgres DB. It seems as though the jdbc output is not finding these newly generated fields. I have verified that this is valid JSON and that it is properly ingested into Elasticsearch. As you can see, I am also able to use these generated fields to create a new index-name field, which the jdbc output is then able to use:
mutate { add_field => [ "index_name", "%{[MTE][test_station]}" ] }
JDBC Output error message:
JDBC - Exception. Not retrying {:exception=>#<RuntimeError: Invalid FieldReference: `%{[MTE][serial_number]}`>, :statement=>"IN
My logstash config:
input {
  beats {
    client_inactivity_timeout => 12000000
    port => 5044
  }
}
filter {
  mutate {
    gsub => [
      "message", "\n", "",
      "message", " ", ""
    ]
  }
  json {
    source => "message"
  }
  mutate {
    copy => { "[MTE][timestamp]" => "timestamp" }
  }
  mutate {
    gsub => [
      "timestamp", "T", ""
    ]
  }
  date {
    match => ["timestamp", "YYYYMMddHHmmssZ"]
    timezone => "UTC"
    target => "@timestamp"
  }
  mutate {
    remove_field => "timestamp"
  }
  mutate {
    add_field => [ "index_name", "%{[MTE][test_station]}" ]
  }
  mutate {
    lowercase => "index_name"
  }
}
output {
  elasticsearch {
    hosts => "elasticsearch:9200"
    manage_template => true
    index => "%{[index_name]}"
    template => "/usr/share/logstash/logstash-template.json"
    template_name => "mte_dynamic_template"
    template_overwrite => true
  }
  jdbc {
    enable_event_as_json_keyword => true
    connection_string => 'jdbc:postgresql://postgres:5432/postgres'
    username => 'asdf'
    password => 'asdf'
    #driver_jar_path => '/usr/share/logstash/postgres.jar'
    statement => [ "INSERT INTO mfg_db (timestamp, test_name, serial_number, test_pass) VALUES(CAST (? as timestamp), ?, ?, ?)",
      "%{@timestamp}", "%{index_name}", "%{[MTE][serial_number]}", "%{[MTE][test_pass]}" ]
  }
}
If the above doesn’t help, shout and I’ll try to reproduce the issue when I have a moment. Please do make sure first that the serial_number field is definitely in your event, though (you have no idea how many times I’ve gone digging into something for someone only for the field to be literally not there). I’m not seeing it in the rest of your config, which means it must be coming from your original event.
Thank you for reaching out!! In stdout it appears to be a field, albeit nested. I should say that as long as I copy the nested fields into new fields, everything works fine. I’ve added back in one of the nested fields.
I had the thought that maybe it was an issue with Logstash, so I tried putting the nested field into the elasticsearch output to see if I’d get a similar error. Although I did get an error, it was due to the field not being lowercase.
For now I can live with this by manually adding fields in the future, but I feel like there might be something here. Let me know if you’d like me to try anything else.
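For reference, the workaround I’m using looks roughly like this: copy each nested value into a flat top-level field and reference the flat field in the jdbc statement instead (the flat field name here is just my choice):

mutate {
  # flatten the nested field so the jdbc output can reference it as "%{serial_number}"
  copy => { "[MTE][serial_number]" => "serial_number" }
}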
Can you add a redacted copy of an input event so I can have a play at some point this weekend, if I get a moment? Just to make sure I’ll definitely be using the same structure.
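(Something along these lines would be plenty; this skeleton is built purely from the field references visible in your config, and every value is a placeholder:)

{
  "MTE": {
    "timestamp": "20190531T191100+0000",
    "test_station": "STATION_A",
    "serial_number": "SN-PLACEHOLDER",
    "test_pass": true
  }
}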