Rsyslog logging multiple lines (exact duplicate lines) under messages (CentOS - Amazon AMI)

I am using rsyslog v5 to centralize logs to a server. I see exact duplicate logs under /var/log/messages on my log server, although I do not see duplicate lines in the logs on the distributed servers.
I am using the Amazon CentOS AMI.

I figured it out: I was monitoring 2 files on each server and was sending them via *.* @@<%= @log_servers %>:514 twice.
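For reference, a minimal sketch of a single forwarding rule on each distributed server; the hostname below is a placeholder for the templated log server value:
# forward everything once, over TCP (@@); repeating this rule produces duplicate logs on the central server
*.* @@logserver.example.com:514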

Related

Aerospike: How to find from any aerospike server which clients are accessing it?

We had multiple clients configured to talk to this cluster of aerospike nodes. Now that we have removed the configuration from all the clients we are aware of, there are still some read/write requests coming to this cluster, as shown in the AMC.
I looked at the log file generated in /var/log/aerospike/aerospike.log, but could not get any information.
Update
The netstat command mentioned in the answer by @kporter shows the number of connections, with statuses ESTABLISHED, TIME_WAIT, CLOSE_WAIT, etc. But that does not mean those connections are currently being used for get/set operations. How do I get the IPs from which Aerospike operations are currently being performed?
Update 2 (Solved)
As mentioned in the comments on @kporter's answer, a tcpdump command on the culprit client showed packets still being sent to the Aerospike cluster that was no longer referenced in the config file. This was happening even while the AMC of that cluster showed no more read/write TPS.
I later found that this stopped after restarting the nginx service on the client. Note that the config file on the client now references a new Aerospike cluster, and packets sent to that cluster did not stop after the nginx restart. This is weird, but it worked.
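For reference, roughly the kind of capture that exposes such stray traffic (a sketch; the interface name and the old cluster's IP below are placeholders):
# run on the suspect client; prints packets still being sent to the old cluster on port 3000
sudo tcpdump -nn -i eth0 dst host 10.20.30.40 and dst port 3000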
Clients connect to Aerospike over port 3000:
The following command, when run on the server nodes, will show the addresses of hosts connecting to the server over port 3000.
netstat --tcp --numeric-ports | grep 3000
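To narrow that down to connections actually in use (rather than ones lingering in TIME_WAIT or CLOSE_WAIT), a grep on the state column can be added; this refinement is a sketch, not part of the original answer:
netstat --tcp --numeric-ports | grep 3000 | grep ESTABLISHED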

Airflow Remote logging not working

I have an up-and-running Apache Airflow 1.8.1 instance.
I have a working connection (and its ID) to write to Google Cloud Storage, and my Airflow user has permission to write to the bucket.
I am trying to use the remote log storage functionality by adding
remote_base_log_folder = 'gs://my-bucket/log'
remote_log_conn_id = 'my_working_conn_id'
And that's all (I didn't touch any configuration other than that).
I restarted all the services, but the logs aren't being uploaded to GCS (my bucket is still empty) and my filesystem space is still decreasing.
Have you successfully enabled remote logging with GCS? If so, what did you change / do?
I managed to get remote logging to GCS working. First, you need to give the service account permission to write to the GCS bucket.
This is my GCP connection setup:
Then, edit the airflow.cfg file:
remote_base_log_folder = gs://my-backup/airflow_logs
remote_log_conn_id = my_gcp_conn
After editing the config file, you need to re-initialize the database and restart the web server:
airflow initdb
# start the web server, default port is 8080
airflow webserver -p 8080
Test by turning on the "tutorial" DAG; you should be able to see the logs both locally and remotely in GCS.
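One way to confirm that the remote copies are landing is to list the bucket path from the config above (a sketch, assuming the gsutil CLI is installed):
gsutil ls gs://my-backup/airflow_logs/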

Specific logging with rsyslog and ELK

I have an rsyslog server and an ELK stack running on the same machine.
Our application forwards its logs to rsyslog, which forwards them to Logstash on localhost.
We now want to split up our logging (frontend and backend logging).
Our frontend dev has added a [frontend] tag that is appended to the message.
Is it possible to filter these messages out in rsyslog and forward them to another Logstash instance while keeping the backend logging?
I have this in my configuration at the moment, but it keeps forwarding all messages to that Logstash:
*.* @@localhost:5555
:msg, contains, "\[frontend\]" stop
*.* @@localhost:5544
:programname, contains, "backend" ~
We are sending the frontend logs through the backend, so the program name 'backend' is in every message we receive.
I did some more research and found a working solution:
*.* {
:msg, contains, "\[frontend\]"
@@localhost:5555
}
*.* {:programname, contains, "backend"
@@localhost:5544
stop
}
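For comparison, the same routing can be expressed in rsyslog's newer RainerScript syntax; this is a sketch under the assumption that the two Logstash ports above are correct, not part of the original answer:
# send [frontend]-tagged messages to one Logstash, messages from the backend program to the other
if $msg contains '[frontend]' then {
    action(type="omfwd" target="localhost" port="5555" protocol="tcp")
    stop
}
if $programname contains 'backend' then {
    action(type="omfwd" target="localhost" port="5544" protocol="tcp")
    stop
}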

Spark Multinode Cluster: Unable to open cluster info in web UI

I have created a multinode Spark cluster. I have added the IP addresses of both machines to the /etc/hosts file on both machines. The commands, like start-master.sh and start-slave.sh, are working fine.
ZooKeeper also shows in the terminal who is the leader and who is the follower when running the zkServer.sh status command. But when I go to the web UI at "slave1:8080", it shows "page not found".
Note: slave1 and slave2 are the machine names.
What can be the issue with this?
Thanks in advance.

Where are redirected rsyslog logs saved?

I have connected two of my computers to form a network, with only an ethernet cable between them. They are both Ubuntu 12.04 and can ping each other without a problem. For the logs, the machine I want to forward from is 10.0.0.1 and the one I want to send to is 10.0.0.2.
I wanted to redirect the logs via TCP, so on the client I added the following line to /etc/rsyslog.conf, as described in many how-to guides:
*.* @@10.0.0.2:514
Then, on the machine with address 10.0.0.2, where I want the logs forwarded to, I uncommented the lines below, which I understand to be the correct configuration.
$ModLoad imtcp
$InputTCPServerRun 514
I can't see that I need to do anything else based on the guides I have read. I have restarted both machines, but I can't see anything in /var/log which suggests that another machine's logs are being saved.
Where should they be saved? Thanks for reading.
In the rsyslog configuration on 10.0.0.2, you should have some filter/action lines like:
*.* /var/log/syslog
See the Filter Conditions part of the rsyslog documentation for more information.
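Putting the pieces together, a minimal receiving-side configuration on 10.0.0.2 could look like the sketch below; the per-host template is an optional addition, not something from the original answer:
# /etc/rsyslog.conf on 10.0.0.2
$ModLoad imtcp
$InputTCPServerRun 514

# write everything (local and remote) to the default file
*.* /var/log/syslog

# optionally, split incoming logs per sending host instead
$template RemoteLogs,"/var/log/remote/%HOSTNAME%.log"
*.* ?RemoteLogs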
